Test Report: KVM_Linux_crio 19888

b240f9d77986126e9714444475c34e6cc49a474f:2024-12-10:37414

Failed tests (32/321)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 151.34
38 TestAddons/parallel/MetricsServer 351.67
47 TestAddons/StoppedEnableDisable 154.27
116 TestFunctional/parallel/ImageCommands/ImageListShort 2.25
168 TestMultiControlPlane/serial/StopSecondaryNode 141.46
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.63
170 TestMultiControlPlane/serial/RestartSecondaryNode 6.44
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.29
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 354.63
175 TestMultiControlPlane/serial/StopCluster 141.81
235 TestMultiNode/serial/RestartKeepsNodes 319.62
237 TestMultiNode/serial/StopMultiNode 145.26
244 TestPreload 267.37
252 TestKubernetesUpgrade 525.65
329 TestStartStop/group/old-k8s-version/serial/FirstStart 278.46
350 TestStartStop/group/no-preload/serial/Stop 139
352 TestStartStop/group/embed-certs/serial/Stop 138.93
355 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.16
356 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
357 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 103.52
358 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.39
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
362 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
366 TestStartStop/group/old-k8s-version/serial/SecondStart 743.65
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.11
368 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.24
369 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.15
370 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.39
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 435.12
372 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 420.88
373 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 285.24
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 128.16
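For triage, any single entry in this table can be re-run in isolation against a local minikube checkout. The sketch below is a hypothetical invocation following the TEST_ARGS pattern from minikube's contributor documentation, not a command taken from this run; verify the flag names against your branch. This job's clusters were also started with --container-runtime=crio.

    # Hypothetical re-run of one failed test from a minikube source checkout.
    # TEST_ARGS and -minikube-start-args follow the contributor docs; adjust
    # the start args (e.g. add --container-runtime=crio) to match this job.
    env TEST_ARGS="-minikube-start-args=--driver=kvm2 -test.run TestAddons/parallel/Ingress" \
      make integration
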
TestAddons/parallel/Ingress (151.34s)
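The failure below is a timeout on the in-VM curl probe: "ssh: Process exited with status 28" reports the exit code of the curl process run inside the VM, and 28 is curl's exit code for an operation timeout, so the request to the ingress controller never completed within the deadline. A minimal manual reproduction, assuming the addons-495659 profile from this run is still available, mirrors the commands the test itself runs:

    # Check that the ingress-nginx controller pod is actually Ready.
    kubectl --context addons-495659 get pods -n ingress-nginx -l app.kubernetes.io/component=controller
    # Repeat the probe with -v to see whether the request reaches the controller.
    out/minikube-linux-amd64 -p addons-495659 ssh "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"
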
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-495659 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-495659 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-495659 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8e3e4108-ab04-4c22-9a48-b5b6431d743f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8e3e4108-ab04-4c22-9a48-b5b6431d743f] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004405297s
I1209 22:36:15.092930   26253 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-495659 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.800342295s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-495659 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.123
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-495659 -n addons-495659
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-495659 logs -n 25: (1.242036621s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC | 09 Dec 24 22:32 UTC |
	| delete  | -p download-only-578923                                                                     | download-only-578923 | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC | 09 Dec 24 22:32 UTC |
	| delete  | -p download-only-091652                                                                     | download-only-091652 | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC | 09 Dec 24 22:32 UTC |
	| delete  | -p download-only-578923                                                                     | download-only-578923 | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC | 09 Dec 24 22:32 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-501847 | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC |                     |
	|         | binary-mirror-501847                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39247                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-501847                                                                     | binary-mirror-501847 | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC | 09 Dec 24 22:32 UTC |
	| addons  | enable dashboard -p                                                                         | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC |                     |
	|         | addons-495659                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC |                     |
	|         | addons-495659                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-495659 --wait=true                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC | 09 Dec 24 22:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:35 UTC | 09 Dec 24 22:35 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:35 UTC | 09 Dec 24 22:35 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:35 UTC | 09 Dec 24 22:35 UTC |
	|         | -p addons-495659                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:35 UTC | 09 Dec 24 22:35 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-495659 addons                                                                        | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:35 UTC | 09 Dec 24 22:35 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:35 UTC | 09 Dec 24 22:36 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-495659 ip                                                                            | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-495659 addons                                                                        | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-495659 ssh cat                                                                       | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | /opt/local-path-provisioner/pvc-d50f59cb-64dd-4a2e-b94c-429fc96e21da_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-495659 addons                                                                        | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-495659 ssh curl -s                                                                   | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-495659 addons                                                                        | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-495659 addons                                                                        | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-495659 ip                                                                            | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:38 UTC | 09 Dec 24 22:38 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 22:32:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 22:32:17.751936   26899 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:32:17.752493   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:32:17.752512   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:32:17.752519   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:32:17.752941   26899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 22:32:17.753776   26899 out.go:352] Setting JSON to false
	I1209 22:32:17.754593   26899 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4489,"bootTime":1733779049,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 22:32:17.754691   26899 start.go:139] virtualization: kvm guest
	I1209 22:32:17.756536   26899 out.go:177] * [addons-495659] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 22:32:17.758116   26899 notify.go:220] Checking for updates...
	I1209 22:32:17.758129   26899 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 22:32:17.759348   26899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:32:17.760632   26899 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:32:17.761843   26899 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:32:17.763110   26899 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 22:32:17.764437   26899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 22:32:17.765761   26899 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:32:17.797653   26899 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 22:32:17.798937   26899 start.go:297] selected driver: kvm2
	I1209 22:32:17.798949   26899 start.go:901] validating driver "kvm2" against <nil>
	I1209 22:32:17.798960   26899 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 22:32:17.799752   26899 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:32:17.799867   26899 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 22:32:17.814913   26899 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 22:32:17.814964   26899 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 22:32:17.815244   26899 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:32:17.815276   26899 cni.go:84] Creating CNI manager for ""
	I1209 22:32:17.815336   26899 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 22:32:17.815348   26899 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 22:32:17.815418   26899 start.go:340] cluster config:
	{Name:addons-495659 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-495659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:32:17.815986   26899 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:32:17.817830   26899 out.go:177] * Starting "addons-495659" primary control-plane node in "addons-495659" cluster
	I1209 22:32:17.819404   26899 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:32:17.819437   26899 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 22:32:17.819447   26899 cache.go:56] Caching tarball of preloaded images
	I1209 22:32:17.819509   26899 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:32:17.819526   26899 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:32:17.819824   26899 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/config.json ...
	I1209 22:32:17.819847   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/config.json: {Name:mk956352758e5b2bd9e07f8704d8de74b0230bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:17.819991   26899 start.go:360] acquireMachinesLock for addons-495659: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:32:17.820036   26899 start.go:364] duration metric: took 31.824µs to acquireMachinesLock for "addons-495659"
	I1209 22:32:17.820053   26899 start.go:93] Provisioning new machine with config: &{Name:addons-495659 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-495659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:32:17.820112   26899 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 22:32:17.822592   26899 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1209 22:32:17.822749   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:32:17.822781   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:32:17.837004   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38047
	I1209 22:32:17.837441   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:32:17.838048   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:32:17.838064   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:32:17.838468   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:32:17.838654   26899 main.go:141] libmachine: (addons-495659) Calling .GetMachineName
	I1209 22:32:17.838802   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:17.838938   26899 start.go:159] libmachine.API.Create for "addons-495659" (driver="kvm2")
	I1209 22:32:17.838961   26899 client.go:168] LocalClient.Create starting
	I1209 22:32:17.839002   26899 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:32:17.966324   26899 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:32:18.100691   26899 main.go:141] libmachine: Running pre-create checks...
	I1209 22:32:18.100722   26899 main.go:141] libmachine: (addons-495659) Calling .PreCreateCheck
	I1209 22:32:18.101180   26899 main.go:141] libmachine: (addons-495659) Calling .GetConfigRaw
	I1209 22:32:18.101590   26899 main.go:141] libmachine: Creating machine...
	I1209 22:32:18.101605   26899 main.go:141] libmachine: (addons-495659) Calling .Create
	I1209 22:32:18.101776   26899 main.go:141] libmachine: (addons-495659) Creating KVM machine...
	I1209 22:32:18.103072   26899 main.go:141] libmachine: (addons-495659) DBG | found existing default KVM network
	I1209 22:32:18.103817   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:18.103669   26921 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I1209 22:32:18.103837   26899 main.go:141] libmachine: (addons-495659) DBG | created network xml: 
	I1209 22:32:18.103846   26899 main.go:141] libmachine: (addons-495659) DBG | <network>
	I1209 22:32:18.103851   26899 main.go:141] libmachine: (addons-495659) DBG |   <name>mk-addons-495659</name>
	I1209 22:32:18.103859   26899 main.go:141] libmachine: (addons-495659) DBG |   <dns enable='no'/>
	I1209 22:32:18.103871   26899 main.go:141] libmachine: (addons-495659) DBG |   
	I1209 22:32:18.103883   26899 main.go:141] libmachine: (addons-495659) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 22:32:18.103898   26899 main.go:141] libmachine: (addons-495659) DBG |     <dhcp>
	I1209 22:32:18.103906   26899 main.go:141] libmachine: (addons-495659) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 22:32:18.103915   26899 main.go:141] libmachine: (addons-495659) DBG |     </dhcp>
	I1209 22:32:18.103920   26899 main.go:141] libmachine: (addons-495659) DBG |   </ip>
	I1209 22:32:18.103927   26899 main.go:141] libmachine: (addons-495659) DBG |   
	I1209 22:32:18.103933   26899 main.go:141] libmachine: (addons-495659) DBG | </network>
	I1209 22:32:18.103940   26899 main.go:141] libmachine: (addons-495659) DBG | 
	I1209 22:32:18.110236   26899 main.go:141] libmachine: (addons-495659) DBG | trying to create private KVM network mk-addons-495659 192.168.39.0/24...
	I1209 22:32:18.172448   26899 main.go:141] libmachine: (addons-495659) DBG | private KVM network mk-addons-495659 192.168.39.0/24 created
	I1209 22:32:18.172478   26899 main.go:141] libmachine: (addons-495659) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659 ...
	I1209 22:32:18.172534   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:18.172450   26921 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:32:18.172573   26899 main.go:141] libmachine: (addons-495659) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:32:18.172597   26899 main.go:141] libmachine: (addons-495659) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:32:18.422402   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:18.422252   26921 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa...
	I1209 22:32:18.636573   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:18.636427   26921 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/addons-495659.rawdisk...
	I1209 22:32:18.636606   26899 main.go:141] libmachine: (addons-495659) DBG | Writing magic tar header
	I1209 22:32:18.636617   26899 main.go:141] libmachine: (addons-495659) DBG | Writing SSH key tar header
	I1209 22:32:18.636626   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:18.636538   26921 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659 ...
	I1209 22:32:18.636644   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659
	I1209 22:32:18.636686   26899 main.go:141] libmachine: (addons-495659) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659 (perms=drwx------)
	I1209 22:32:18.636699   26899 main.go:141] libmachine: (addons-495659) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:32:18.636710   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:32:18.636724   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:32:18.636731   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:32:18.636740   26899 main.go:141] libmachine: (addons-495659) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:32:18.636752   26899 main.go:141] libmachine: (addons-495659) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:32:18.636758   26899 main.go:141] libmachine: (addons-495659) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:32:18.636770   26899 main.go:141] libmachine: (addons-495659) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:32:18.636782   26899 main.go:141] libmachine: (addons-495659) Creating domain...
	I1209 22:32:18.636791   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:32:18.636801   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:32:18.636809   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home
	I1209 22:32:18.636820   26899 main.go:141] libmachine: (addons-495659) DBG | Skipping /home - not owner
	I1209 22:32:18.637858   26899 main.go:141] libmachine: (addons-495659) define libvirt domain using xml: 
	I1209 22:32:18.637882   26899 main.go:141] libmachine: (addons-495659) <domain type='kvm'>
	I1209 22:32:18.637892   26899 main.go:141] libmachine: (addons-495659)   <name>addons-495659</name>
	I1209 22:32:18.637900   26899 main.go:141] libmachine: (addons-495659)   <memory unit='MiB'>4000</memory>
	I1209 22:32:18.637909   26899 main.go:141] libmachine: (addons-495659)   <vcpu>2</vcpu>
	I1209 22:32:18.637915   26899 main.go:141] libmachine: (addons-495659)   <features>
	I1209 22:32:18.637927   26899 main.go:141] libmachine: (addons-495659)     <acpi/>
	I1209 22:32:18.637936   26899 main.go:141] libmachine: (addons-495659)     <apic/>
	I1209 22:32:18.637945   26899 main.go:141] libmachine: (addons-495659)     <pae/>
	I1209 22:32:18.637956   26899 main.go:141] libmachine: (addons-495659)     
	I1209 22:32:18.637968   26899 main.go:141] libmachine: (addons-495659)   </features>
	I1209 22:32:18.637976   26899 main.go:141] libmachine: (addons-495659)   <cpu mode='host-passthrough'>
	I1209 22:32:18.637984   26899 main.go:141] libmachine: (addons-495659)   
	I1209 22:32:18.637991   26899 main.go:141] libmachine: (addons-495659)   </cpu>
	I1209 22:32:18.638003   26899 main.go:141] libmachine: (addons-495659)   <os>
	I1209 22:32:18.638011   26899 main.go:141] libmachine: (addons-495659)     <type>hvm</type>
	I1209 22:32:18.638019   26899 main.go:141] libmachine: (addons-495659)     <boot dev='cdrom'/>
	I1209 22:32:18.638033   26899 main.go:141] libmachine: (addons-495659)     <boot dev='hd'/>
	I1209 22:32:18.638047   26899 main.go:141] libmachine: (addons-495659)     <bootmenu enable='no'/>
	I1209 22:32:18.638057   26899 main.go:141] libmachine: (addons-495659)   </os>
	I1209 22:32:18.638067   26899 main.go:141] libmachine: (addons-495659)   <devices>
	I1209 22:32:18.638078   26899 main.go:141] libmachine: (addons-495659)     <disk type='file' device='cdrom'>
	I1209 22:32:18.638095   26899 main.go:141] libmachine: (addons-495659)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/boot2docker.iso'/>
	I1209 22:32:18.638110   26899 main.go:141] libmachine: (addons-495659)       <target dev='hdc' bus='scsi'/>
	I1209 22:32:18.638122   26899 main.go:141] libmachine: (addons-495659)       <readonly/>
	I1209 22:32:18.638132   26899 main.go:141] libmachine: (addons-495659)     </disk>
	I1209 22:32:18.638143   26899 main.go:141] libmachine: (addons-495659)     <disk type='file' device='disk'>
	I1209 22:32:18.638155   26899 main.go:141] libmachine: (addons-495659)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:32:18.638171   26899 main.go:141] libmachine: (addons-495659)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/addons-495659.rawdisk'/>
	I1209 22:32:18.638186   26899 main.go:141] libmachine: (addons-495659)       <target dev='hda' bus='virtio'/>
	I1209 22:32:18.638198   26899 main.go:141] libmachine: (addons-495659)     </disk>
	I1209 22:32:18.638208   26899 main.go:141] libmachine: (addons-495659)     <interface type='network'>
	I1209 22:32:18.638220   26899 main.go:141] libmachine: (addons-495659)       <source network='mk-addons-495659'/>
	I1209 22:32:18.638230   26899 main.go:141] libmachine: (addons-495659)       <model type='virtio'/>
	I1209 22:32:18.638242   26899 main.go:141] libmachine: (addons-495659)     </interface>
	I1209 22:32:18.638257   26899 main.go:141] libmachine: (addons-495659)     <interface type='network'>
	I1209 22:32:18.638269   26899 main.go:141] libmachine: (addons-495659)       <source network='default'/>
	I1209 22:32:18.638279   26899 main.go:141] libmachine: (addons-495659)       <model type='virtio'/>
	I1209 22:32:18.638290   26899 main.go:141] libmachine: (addons-495659)     </interface>
	I1209 22:32:18.638300   26899 main.go:141] libmachine: (addons-495659)     <serial type='pty'>
	I1209 22:32:18.638312   26899 main.go:141] libmachine: (addons-495659)       <target port='0'/>
	I1209 22:32:18.638322   26899 main.go:141] libmachine: (addons-495659)     </serial>
	I1209 22:32:18.638344   26899 main.go:141] libmachine: (addons-495659)     <console type='pty'>
	I1209 22:32:18.638359   26899 main.go:141] libmachine: (addons-495659)       <target type='serial' port='0'/>
	I1209 22:32:18.638366   26899 main.go:141] libmachine: (addons-495659)     </console>
	I1209 22:32:18.638370   26899 main.go:141] libmachine: (addons-495659)     <rng model='virtio'>
	I1209 22:32:18.638380   26899 main.go:141] libmachine: (addons-495659)       <backend model='random'>/dev/random</backend>
	I1209 22:32:18.638386   26899 main.go:141] libmachine: (addons-495659)     </rng>
	I1209 22:32:18.638391   26899 main.go:141] libmachine: (addons-495659)     
	I1209 22:32:18.638398   26899 main.go:141] libmachine: (addons-495659)     
	I1209 22:32:18.638403   26899 main.go:141] libmachine: (addons-495659)   </devices>
	I1209 22:32:18.638409   26899 main.go:141] libmachine: (addons-495659) </domain>
	I1209 22:32:18.638416   26899 main.go:141] libmachine: (addons-495659) 
	I1209 22:32:18.645091   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:03:36:99 in network default
	I1209 22:32:18.645626   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:18.645643   26899 main.go:141] libmachine: (addons-495659) Ensuring networks are active...
	I1209 22:32:18.646336   26899 main.go:141] libmachine: (addons-495659) Ensuring network default is active
	I1209 22:32:18.646660   26899 main.go:141] libmachine: (addons-495659) Ensuring network mk-addons-495659 is active
	I1209 22:32:18.647138   26899 main.go:141] libmachine: (addons-495659) Getting domain xml...
	I1209 22:32:18.647786   26899 main.go:141] libmachine: (addons-495659) Creating domain...
	I1209 22:32:20.055916   26899 main.go:141] libmachine: (addons-495659) Waiting to get IP...
	I1209 22:32:20.056613   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:20.056997   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:20.057036   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:20.056986   26921 retry.go:31] will retry after 218.738592ms: waiting for machine to come up
	I1209 22:32:20.277370   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:20.277796   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:20.277826   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:20.277737   26921 retry.go:31] will retry after 267.521853ms: waiting for machine to come up
	I1209 22:32:20.547141   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:20.547641   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:20.547662   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:20.547600   26921 retry.go:31] will retry after 327.553235ms: waiting for machine to come up
	I1209 22:32:20.876946   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:20.877395   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:20.877434   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:20.877339   26921 retry.go:31] will retry after 499.585414ms: waiting for machine to come up
	I1209 22:32:21.379044   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:21.379460   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:21.379502   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:21.379429   26921 retry.go:31] will retry after 626.096312ms: waiting for machine to come up
	I1209 22:32:22.007279   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:22.007690   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:22.007714   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:22.007659   26921 retry.go:31] will retry after 750.630685ms: waiting for machine to come up
	I1209 22:32:22.759423   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:22.759783   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:22.759816   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:22.759767   26921 retry.go:31] will retry after 1.046969717s: waiting for machine to come up
	I1209 22:32:23.808231   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:23.808619   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:23.808650   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:23.808567   26921 retry.go:31] will retry after 1.386247951s: waiting for machine to come up
	I1209 22:32:25.196568   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:25.196910   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:25.196943   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:25.196852   26921 retry.go:31] will retry after 1.740538424s: waiting for machine to come up
	I1209 22:32:26.939741   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:26.940162   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:26.940194   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:26.940114   26921 retry.go:31] will retry after 1.546303558s: waiting for machine to come up
	I1209 22:32:28.487709   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:28.488106   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:28.488123   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:28.488079   26921 retry.go:31] will retry after 1.978335172s: waiting for machine to come up
	I1209 22:32:30.468778   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:30.469252   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:30.469275   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:30.469181   26921 retry.go:31] will retry after 2.737537028s: waiting for machine to come up
	I1209 22:32:33.208612   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:33.209035   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:33.209067   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:33.208987   26921 retry.go:31] will retry after 3.781811448s: waiting for machine to come up
	I1209 22:32:36.994961   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:36.995294   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:36.995317   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:36.995265   26921 retry.go:31] will retry after 5.000462753s: waiting for machine to come up
	I1209 22:32:42.000269   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.000667   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has current primary IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.000680   26899 main.go:141] libmachine: (addons-495659) Found IP for machine: 192.168.39.123
	I1209 22:32:42.000692   26899 main.go:141] libmachine: (addons-495659) Reserving static IP address...
	I1209 22:32:42.001024   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find host DHCP lease matching {name: "addons-495659", mac: "52:54:00:b0:9d:b8", ip: "192.168.39.123"} in network mk-addons-495659
	I1209 22:32:42.069062   26899 main.go:141] libmachine: (addons-495659) DBG | Getting to WaitForSSH function...
	I1209 22:32:42.069092   26899 main.go:141] libmachine: (addons-495659) Reserved static IP address: 192.168.39.123
	I1209 22:32:42.069182   26899 main.go:141] libmachine: (addons-495659) Waiting for SSH to be available...
	I1209 22:32:42.071385   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.071764   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.071795   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.071895   26899 main.go:141] libmachine: (addons-495659) DBG | Using SSH client type: external
	I1209 22:32:42.071932   26899 main.go:141] libmachine: (addons-495659) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa (-rw-------)
	I1209 22:32:42.071970   26899 main.go:141] libmachine: (addons-495659) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:32:42.071992   26899 main.go:141] libmachine: (addons-495659) DBG | About to run SSH command:
	I1209 22:32:42.072007   26899 main.go:141] libmachine: (addons-495659) DBG | exit 0
	I1209 22:32:42.199368   26899 main.go:141] libmachine: (addons-495659) DBG | SSH cmd err, output: <nil>: 
	I1209 22:32:42.199666   26899 main.go:141] libmachine: (addons-495659) KVM machine creation complete!
	I1209 22:32:42.199897   26899 main.go:141] libmachine: (addons-495659) Calling .GetConfigRaw
	I1209 22:32:42.200410   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:42.200605   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:42.200761   26899 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:32:42.200777   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:32:42.201938   26899 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:32:42.201954   26899 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:32:42.201962   26899 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:32:42.201967   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.203942   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.204246   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.204277   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.204440   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:42.204607   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.204734   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.204824   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:42.204986   26899 main.go:141] libmachine: Using SSH client type: native
	I1209 22:32:42.205171   26899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1209 22:32:42.205182   26899 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:32:42.314867   26899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
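The "Getting to WaitForSSH function" / "exit 0" exchange above is libmachine's SSH readiness probe: dial the freshly created guest, run a no-op command, and retry until it succeeds. A minimal Go sketch of that pattern follows; it is not minikube's implementation, and the address, user and key path are placeholders taken from the log.

    // sshprobe.go - illustrative only: wait for SSH by repeatedly running "exit 0".
    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/id_rsa") // placeholder for the machine key shown in the log
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        for {
            client, err := ssh.Dial("tcp", "192.168.39.123:22", cfg)
            if err != nil {
                time.Sleep(time.Second) // guest still booting; try again
                continue
            }
            sess, err := client.NewSession()
            if err == nil {
                err = sess.Run("exit 0") // the same no-op probe the log shows
                sess.Close()
            }
            client.Close()
            if err == nil {
                log.Println("SSH is available")
                return
            }
            time.Sleep(time.Second)
        }
    }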
	I1209 22:32:42.314891   26899 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:32:42.314906   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.317469   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.317790   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.317813   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.317940   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:42.318102   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.318276   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.318432   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:42.318559   26899 main.go:141] libmachine: Using SSH client type: native
	I1209 22:32:42.318734   26899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1209 22:32:42.318748   26899 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:32:42.427966   26899 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:32:42.428036   26899 main.go:141] libmachine: found compatible host: buildroot
	I1209 22:32:42.428048   26899 main.go:141] libmachine: Provisioning with buildroot...
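Provisioner detection above boils down to reading /etc/os-release over SSH and matching the distribution name. A small local sketch of parsing the ID= field, per os-release(5); the buildroot case is the only one the log confirms, everything else here is illustrative.

    // osrelease.go - illustrative parse of /etc/os-release.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func osReleaseID(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
            }
        }
        return "", sc.Err()
    }

    func main() {
        id, err := osReleaseID("/etc/os-release")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if id == "buildroot" {
            fmt.Println("found compatible host: buildroot")
        } else {
            fmt.Printf("host OS is %q, not buildroot\n", id)
        }
    }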
	I1209 22:32:42.428062   26899 main.go:141] libmachine: (addons-495659) Calling .GetMachineName
	I1209 22:32:42.428301   26899 buildroot.go:166] provisioning hostname "addons-495659"
	I1209 22:32:42.428323   26899 main.go:141] libmachine: (addons-495659) Calling .GetMachineName
	I1209 22:32:42.428492   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.431267   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.431606   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.431634   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.431768   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:42.431939   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.432096   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.432211   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:42.432362   26899 main.go:141] libmachine: Using SSH client type: native
	I1209 22:32:42.432521   26899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1209 22:32:42.432533   26899 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-495659 && echo "addons-495659" | sudo tee /etc/hostname
	I1209 22:32:42.556903   26899 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-495659
	
	I1209 22:32:42.556931   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.559508   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.559960   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.559982   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.560153   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:42.560336   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.560466   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.560572   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:42.560740   26899 main.go:141] libmachine: Using SSH client type: native
	I1209 22:32:42.561003   26899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1209 22:32:42.561022   26899 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-495659' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-495659/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-495659' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:32:42.680108   26899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:32:42.680138   26899 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:32:42.680162   26899 buildroot.go:174] setting up certificates
	I1209 22:32:42.680173   26899 provision.go:84] configureAuth start
	I1209 22:32:42.680182   26899 main.go:141] libmachine: (addons-495659) Calling .GetMachineName
	I1209 22:32:42.680410   26899 main.go:141] libmachine: (addons-495659) Calling .GetIP
	I1209 22:32:42.682903   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.683152   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.683185   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.683334   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.685213   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.685510   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.685533   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.685668   26899 provision.go:143] copyHostCerts
	I1209 22:32:42.685741   26899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:32:42.685861   26899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:32:42.685948   26899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:32:42.686050   26899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.addons-495659 san=[127.0.0.1 192.168.39.123 addons-495659 localhost minikube]
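The provision.go line above records the SANs placed in the docker-machine server certificate (127.0.0.1, 192.168.39.123, addons-495659, localhost, minikube). For illustration only, and not the helper minikube uses, the same certificate shape can be produced with Go's standard library; the throwaway CA below stands in for the ca.pem/ca-key.pem pair named in the auth options.

    // servercert.go - illustrative: issue a server cert with the SANs logged above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA instead of loading ca.pem/ca-key.pem.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"example CA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // Server certificate carrying the SANs from the provision.go line.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        srvTpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-495659"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"addons-495659", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.123")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }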
	I1209 22:32:42.747855   26899 provision.go:177] copyRemoteCerts
	I1209 22:32:42.747936   26899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:32:42.747967   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.750538   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.750840   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.750866   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.751057   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:42.751228   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.751385   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:42.751539   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:32:42.838865   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 22:32:42.863546   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:32:42.886766   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 22:32:42.909892   26899 provision.go:87] duration metric: took 229.705138ms to configureAuth
	I1209 22:32:42.909928   26899 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:32:42.910102   26899 config.go:182] Loaded profile config "addons-495659": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:32:42.910167   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.912959   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.913354   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.913387   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.913489   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:42.913660   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.913812   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.913937   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:42.914109   26899 main.go:141] libmachine: Using SSH client type: native
	I1209 22:32:42.914313   26899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1209 22:32:42.914328   26899 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:32:43.490681   26899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:32:43.490712   26899 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:32:43.490720   26899 main.go:141] libmachine: (addons-495659) Calling .GetURL
	I1209 22:32:43.491968   26899 main.go:141] libmachine: (addons-495659) DBG | Using libvirt version 6000000
	I1209 22:32:43.494426   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.494757   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.494788   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.494971   26899 main.go:141] libmachine: Docker is up and running!
	I1209 22:32:43.494987   26899 main.go:141] libmachine: Reticulating splines...
	I1209 22:32:43.494994   26899 client.go:171] duration metric: took 25.656022047s to LocalClient.Create
	I1209 22:32:43.495018   26899 start.go:167] duration metric: took 25.656080767s to libmachine.API.Create "addons-495659"
	I1209 22:32:43.495029   26899 start.go:293] postStartSetup for "addons-495659" (driver="kvm2")
	I1209 22:32:43.495039   26899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:32:43.495056   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:43.495313   26899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:32:43.495343   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:43.497691   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.498030   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.498058   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.498145   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:43.498352   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:43.498491   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:43.498617   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:32:43.581675   26899 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:32:43.585759   26899 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:32:43.585787   26899 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:32:43.585875   26899 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:32:43.585915   26899 start.go:296] duration metric: took 90.879017ms for postStartSetup
	I1209 22:32:43.585959   26899 main.go:141] libmachine: (addons-495659) Calling .GetConfigRaw
	I1209 22:32:43.586578   26899 main.go:141] libmachine: (addons-495659) Calling .GetIP
	I1209 22:32:43.588956   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.589265   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.589299   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.589574   26899 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/config.json ...
	I1209 22:32:43.589744   26899 start.go:128] duration metric: took 25.769621738s to createHost
	I1209 22:32:43.589765   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:43.591744   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.591994   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.592026   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.592155   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:43.592302   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:43.592435   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:43.592546   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:43.592662   26899 main.go:141] libmachine: Using SSH client type: native
	I1209 22:32:43.592797   26899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1209 22:32:43.592807   26899 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:32:43.700054   26899 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733783563.677358996
	
	I1209 22:32:43.700081   26899 fix.go:216] guest clock: 1733783563.677358996
	I1209 22:32:43.700092   26899 fix.go:229] Guest: 2024-12-09 22:32:43.677358996 +0000 UTC Remote: 2024-12-09 22:32:43.589755063 +0000 UTC m=+25.873246812 (delta=87.603933ms)
	I1209 22:32:43.700117   26899 fix.go:200] guest clock delta is within tolerance: 87.603933ms
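The fix.go lines above parse the guest's `date +%s.%N` output and compare it with the host-side timestamp; the 87.603933ms delta is simply the difference of the two times printed in the log. The arithmetic can be reproduced directly (the 2-second tolerance in the comparison below is an assumption for illustration, not minikube's documented threshold):

    // clockdelta.go - recompute the guest clock delta from the logged timestamps.
    package main

    import (
        "fmt"
        "log"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's default time format
        guest, err := time.Parse(layout, "2024-12-09 22:32:43.677358996 +0000 UTC")
        if err != nil {
            log.Fatal(err)
        }
        remote, err := time.Parse(layout, "2024-12-09 22:32:43.589755063 +0000 UTC")
        if err != nil {
            log.Fatal(err)
        }
        delta := guest.Sub(remote)
        fmt.Println(delta)                 // 87.603933ms, matching the log
        fmt.Println(delta < 2*time.Second) // true: within a hypothetical 2s tolerance
    }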
	I1209 22:32:43.700123   26899 start.go:83] releasing machines lock for "addons-495659", held for 25.88007683s
	I1209 22:32:43.700140   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:43.700420   26899 main.go:141] libmachine: (addons-495659) Calling .GetIP
	I1209 22:32:43.703900   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.704299   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.704339   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.704541   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:43.705010   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:43.705202   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:43.705294   26899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:32:43.705342   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:43.705406   26899 ssh_runner.go:195] Run: cat /version.json
	I1209 22:32:43.705429   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:43.708029   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.708077   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.708458   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.708496   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.708526   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.708544   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.708611   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:43.708740   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:43.708812   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:43.708861   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:43.708905   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:43.708988   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:32:43.709293   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:43.711714   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:32:43.810569   26899 ssh_runner.go:195] Run: systemctl --version
	I1209 22:32:43.816391   26899 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:32:43.967820   26899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:32:43.974148   26899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:32:43.974211   26899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:32:43.990164   26899 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 22:32:43.990195   26899 start.go:495] detecting cgroup driver to use...
	I1209 22:32:43.990275   26899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:32:44.006521   26899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:32:44.020521   26899 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:32:44.020569   26899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:32:44.033780   26899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:32:44.047298   26899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:32:44.173171   26899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:32:44.323153   26899 docker.go:233] disabling docker service ...
	I1209 22:32:44.323221   26899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:32:44.336858   26899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:32:44.348812   26899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:32:44.477437   26899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:32:44.603757   26899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:32:44.617546   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:32:44.635588   26899 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:32:44.635644   26899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:32:44.645750   26899 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:32:44.645816   26899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:32:44.656358   26899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:32:44.666635   26899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:32:44.677009   26899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:32:44.687553   26899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:32:44.698069   26899 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:32:44.715646   26899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:32:44.726196   26899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:32:44.736653   26899 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:32:44.736714   26899 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:32:44.749747   26899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 22:32:44.759915   26899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:32:44.875504   26899 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 22:32:44.962725   26899 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:32:44.962819   26899 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
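After restarting CRI-O, start.go waits up to 60s for /var/run/crio/crio.sock to appear before moving on to crictl. A generic poll-with-deadline sketch of that kind of wait (not the retry helper minikube actually uses):

    // waitsock.go - illustrative: poll for a socket path until a deadline.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket exists; crio is (probably) up
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket is ready")
    }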
	I1209 22:32:44.967010   26899 start.go:563] Will wait 60s for crictl version
	I1209 22:32:44.967079   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:32:44.970473   26899 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:32:45.005670   26899 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:32:45.005799   26899 ssh_runner.go:195] Run: crio --version
	I1209 22:32:45.031425   26899 ssh_runner.go:195] Run: crio --version
	I1209 22:32:45.061238   26899 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:32:45.062528   26899 main.go:141] libmachine: (addons-495659) Calling .GetIP
	I1209 22:32:45.065253   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:45.065585   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:45.065612   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:45.065840   26899 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:32:45.069902   26899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:32:45.081994   26899 kubeadm.go:883] updating cluster {Name:addons-495659 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-495659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 22:32:45.082095   26899 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:32:45.082134   26899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:32:45.112172   26899 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 22:32:45.112237   26899 ssh_runner.go:195] Run: which lz4
	I1209 22:32:45.116006   26899 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 22:32:45.119953   26899 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 22:32:45.119989   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 22:32:46.427220   26899 crio.go:462] duration metric: took 1.311237852s to copy over tarball
	I1209 22:32:46.427299   26899 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 22:32:48.532533   26899 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.105197283s)
	I1209 22:32:48.532567   26899 crio.go:469] duration metric: took 2.105318924s to extract the tarball
	I1209 22:32:48.532577   26899 ssh_runner.go:146] rm: /preloaded.tar.lz4
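The preload tarball above is copied to the guest as /preloaded.tar.lz4 and unpacked with `tar -I lz4`, as the ssh_runner lines show. For local inspection only, the same archive can be read from Go; the github.com/pierrec/lz4/v4 dependency is an assumption here, since minikube itself shells out to tar as logged.

    // listpreload.go - illustrative: list the entries of a .tar.lz4 preload archive.
    package main

    import (
        "archive/tar"
        "fmt"
        "io"
        "log"
        "os"

        "github.com/pierrec/lz4/v4"
    )

    func main() {
        f, err := os.Open("preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        tr := tar.NewReader(lz4.NewReader(f)) // decompress lz4, then walk the tar stream
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println(hdr.Name) // print entry names instead of extracting to /var
        }
    }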
	I1209 22:32:48.568777   26899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:32:48.609560   26899 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 22:32:48.609585   26899 cache_images.go:84] Images are preloaded, skipping loading
	I1209 22:32:48.609593   26899 kubeadm.go:934] updating node { 192.168.39.123 8443 v1.31.2 crio true true} ...
	I1209 22:32:48.609680   26899 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-495659 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-495659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 22:32:48.609741   26899 ssh_runner.go:195] Run: crio config
	I1209 22:32:48.653920   26899 cni.go:84] Creating CNI manager for ""
	I1209 22:32:48.653941   26899 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 22:32:48.653950   26899 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 22:32:48.653971   26899 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-495659 NodeName:addons-495659 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 22:32:48.654093   26899 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-495659"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.123"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 22:32:48.654149   26899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:32:48.663495   26899 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 22:32:48.663553   26899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 22:32:48.672464   26899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1209 22:32:48.688371   26899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:32:48.703824   26899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1209 22:32:48.719253   26899 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I1209 22:32:48.722742   26899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:32:48.734207   26899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:32:48.854440   26899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:32:48.870198   26899 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659 for IP: 192.168.39.123
	I1209 22:32:48.870219   26899 certs.go:194] generating shared ca certs ...
	I1209 22:32:48.870234   26899 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:48.870374   26899 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:32:49.026650   26899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt ...
	I1209 22:32:49.026677   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt: {Name:mk4aa8b3303014e859b905619dc713a14f47f0e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.026883   26899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key ...
	I1209 22:32:49.026899   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key: {Name:mkbe7959b01763460b891869efeaaa7c0b172380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
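The lock.go "WriteFile acquiring ... {Delay:500ms Timeout:1m0s ...}" entries above show each certificate write being serialized behind a lock with a retry delay and an overall timeout. A generic sketch of that acquire-with-deadline shape, using an O_EXCL lock file rather than minikube's own lock helper:

    // filelock.go - illustrative lock-with-timeout; not how minikube implements it.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire creates path with O_EXCL, retrying every delay until timeout.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil // release function
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/ca-certs.lock", 500*time.Millisecond, time.Minute)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer release()
        fmt.Println("lock held; the CA key/cert writes would happen here")
    }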
	I1209 22:32:49.027002   26899 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:32:49.241735   26899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt ...
	I1209 22:32:49.241762   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt: {Name:mk3e1969c35e6866f4a16c819226d1b93c596515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.241936   26899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key ...
	I1209 22:32:49.241952   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key: {Name:mkd0e37343ceab4470419b978e0c2bf516f2ce3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.242048   26899 certs.go:256] generating profile certs ...
	I1209 22:32:49.242103   26899 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.key
	I1209 22:32:49.242121   26899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt with IP's: []
	I1209 22:32:49.315927   26899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt ...
	I1209 22:32:49.315955   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: {Name:mk6648f9e363648de05d75c6d2e1f1f684328858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.316136   26899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.key ...
	I1209 22:32:49.316150   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.key: {Name:mkca584ccbeb9a0c57dc763d42d17d4460c01326 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.316250   26899 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.key.9923c7c7
	I1209 22:32:49.316270   26899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.crt.9923c7c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.123]
	I1209 22:32:49.550866   26899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.crt.9923c7c7 ...
	I1209 22:32:49.550900   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.crt.9923c7c7: {Name:mkd530a8155bdf0b4e134bca78d52b27af7b4494 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.551072   26899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.key.9923c7c7 ...
	I1209 22:32:49.551086   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.key.9923c7c7: {Name:mkedf1f3966ad4142cff91543bde56963440fb0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.551158   26899 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.crt.9923c7c7 -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.crt
	I1209 22:32:49.551228   26899 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.key.9923c7c7 -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.key
	I1209 22:32:49.551279   26899 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.key
	I1209 22:32:49.551296   26899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.crt with IP's: []
	I1209 22:32:49.628687   26899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.crt ...
	I1209 22:32:49.628720   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.crt: {Name:mkcb41c20ca8b63d7740fdda3154dd5f0e5349bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.628878   26899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.key ...
	I1209 22:32:49.628890   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.key: {Name:mkb437074ff7e34ee4861308f77de157585ee72b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.629071   26899 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:32:49.629110   26899 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:32:49.629138   26899 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:32:49.629166   26899 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:32:49.629720   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:32:49.665502   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:32:49.694439   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:32:49.717759   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:32:49.741036   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 22:32:49.763881   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 22:32:49.787209   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:32:49.814492   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:32:49.837585   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:32:49.859783   26899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 22:32:49.875336   26899 ssh_runner.go:195] Run: openssl version
	I1209 22:32:49.880977   26899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:32:49.891205   26899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:32:49.895458   26899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:32:49.895538   26899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:32:49.900955   26899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 22:32:49.911164   26899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:32:49.914973   26899 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:32:49.915018   26899 kubeadm.go:392] StartCluster: {Name:addons-495659 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-495659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:32:49.915082   26899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 22:32:49.915125   26899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 22:32:49.948885   26899 cri.go:89] found id: ""
	I1209 22:32:49.948949   26899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 22:32:49.959467   26899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 22:32:49.969824   26899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 22:32:49.981144   26899 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 22:32:49.981168   26899 kubeadm.go:157] found existing configuration files:
	
	I1209 22:32:49.981217   26899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 22:32:49.991130   26899 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 22:32:49.991201   26899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 22:32:50.001381   26899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 22:32:50.010909   26899 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 22:32:50.010985   26899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 22:32:50.021248   26899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 22:32:50.031081   26899 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 22:32:50.031142   26899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 22:32:50.041284   26899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 22:32:50.050323   26899 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 22:32:50.050399   26899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 22:32:50.059701   26899 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 22:32:50.106288   26899 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 22:32:50.106523   26899 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 22:32:50.208955   26899 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 22:32:50.209097   26899 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 22:32:50.209231   26899 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 22:32:50.217240   26899 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 22:32:50.452949   26899 out.go:235]   - Generating certificates and keys ...
	I1209 22:32:50.453092   26899 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 22:32:50.453173   26899 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 22:32:50.453273   26899 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 22:32:50.504259   26899 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 22:32:50.595277   26899 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 22:32:50.745501   26899 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 22:32:50.802392   26899 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 22:32:50.802686   26899 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-495659 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I1209 22:32:50.912616   26899 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 22:32:50.912842   26899 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-495659 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I1209 22:32:51.353200   26899 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 22:32:51.713825   26899 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 22:32:51.876601   26899 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 22:32:51.876718   26899 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 22:32:52.036728   26899 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 22:32:52.259193   26899 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 22:32:52.501461   26899 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 22:32:52.572835   26899 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 22:32:52.806351   26899 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 22:32:52.806852   26899 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 22:32:52.809192   26899 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 22:32:52.810736   26899 out.go:235]   - Booting up control plane ...
	I1209 22:32:52.810824   26899 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 22:32:52.810919   26899 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 22:32:52.811044   26899 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 22:32:52.826015   26899 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 22:32:52.831761   26899 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 22:32:52.831832   26899 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 22:32:52.953360   26899 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 22:32:52.953454   26899 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 22:32:53.454951   26899 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.80586ms
	I1209 22:32:53.455044   26899 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 22:32:58.454879   26899 kubeadm.go:310] [api-check] The API server is healthy after 5.001388205s
	I1209 22:32:58.465168   26899 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 22:32:58.483322   26899 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 22:32:58.506203   26899 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 22:32:58.506452   26899 kubeadm.go:310] [mark-control-plane] Marking the node addons-495659 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 22:32:58.519808   26899 kubeadm.go:310] [bootstrap-token] Using token: bnekio.9szq1yutbnib956w
	I1209 22:32:58.521388   26899 out.go:235]   - Configuring RBAC rules ...
	I1209 22:32:58.521521   26899 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 22:32:58.526140   26899 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 22:32:58.533502   26899 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 22:32:58.537218   26899 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 22:32:58.545907   26899 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 22:32:58.549552   26899 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 22:32:58.863723   26899 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 22:32:59.284238   26899 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 22:32:59.863681   26899 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 22:32:59.864583   26899 kubeadm.go:310] 
	I1209 22:32:59.864640   26899 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 22:32:59.864650   26899 kubeadm.go:310] 
	I1209 22:32:59.864742   26899 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 22:32:59.864753   26899 kubeadm.go:310] 
	I1209 22:32:59.864774   26899 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 22:32:59.864841   26899 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 22:32:59.864909   26899 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 22:32:59.864917   26899 kubeadm.go:310] 
	I1209 22:32:59.864982   26899 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 22:32:59.864992   26899 kubeadm.go:310] 
	I1209 22:32:59.865068   26899 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 22:32:59.865078   26899 kubeadm.go:310] 
	I1209 22:32:59.865152   26899 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 22:32:59.865291   26899 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 22:32:59.865391   26899 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 22:32:59.865401   26899 kubeadm.go:310] 
	I1209 22:32:59.865494   26899 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 22:32:59.865626   26899 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 22:32:59.865637   26899 kubeadm.go:310] 
	I1209 22:32:59.865707   26899 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bnekio.9szq1yutbnib956w \
	I1209 22:32:59.865853   26899 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1209 22:32:59.865912   26899 kubeadm.go:310] 	--control-plane 
	I1209 22:32:59.865932   26899 kubeadm.go:310] 
	I1209 22:32:59.866054   26899 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 22:32:59.866069   26899 kubeadm.go:310] 
	I1209 22:32:59.866179   26899 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bnekio.9szq1yutbnib956w \
	I1209 22:32:59.866350   26899 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1209 22:32:59.866921   26899 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
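The kubeadm output above reports a successful control-plane init and leaves a single warning: the kubelet systemd unit is not enabled. minikube manages the kubelet itself, but on a host you administer directly the fix the warning points at is a one-liner; a minimal sketch, assuming a systemd host with kubelet already installed:

    # Enable kubelet at boot and start it now, then confirm it is running
    sudo systemctl enable --now kubelet.service
    systemctl status kubelet.service --no-pager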
	I1209 22:32:59.866953   26899 cni.go:84] Creating CNI manager for ""
	I1209 22:32:59.866964   26899 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 22:32:59.869233   26899 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 22:32:59.870462   26899 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 22:32:59.882769   26899 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
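The bridge CNI step writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the log does not show its contents. Below is a short sketch of how to inspect it, followed by a hypothetical bridge-plus-portmap conflist of the general shape such files take; the subnet and plugin options are assumptions for illustration, not the file minikube actually generated:

    # Inspect the generated CNI config on the node (e.g. after `minikube ssh -p addons-495659`)
    sudo cat /etc/cni/net.d/1-k8s.conflist

    # Hypothetical example of what a bridge conflist of this kind can look like:
    # {
    #   "cniVersion": "0.3.1",
    #   "name": "bridge",
    #   "plugins": [
    #     { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
    #       "ipMasq": true, "hairpinMode": true,
    #       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    #     { "type": "portmap", "capabilities": { "portMappings": true } }
    #   ]
    # }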
	I1209 22:32:59.900692   26899 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 22:32:59.900753   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:32:59.900817   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-495659 minikube.k8s.io/updated_at=2024_12_09T22_32_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=addons-495659 minikube.k8s.io/primary=true
	I1209 22:32:59.929367   26899 ops.go:34] apiserver oom_adj: -16
	I1209 22:33:00.011646   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:00.512198   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:01.011613   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:01.512404   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:02.011668   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:02.512318   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:03.011603   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:03.512124   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:04.012635   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:04.111456   26899 kubeadm.go:1113] duration metric: took 4.210764385s to wait for elevateKubeSystemPrivileges
	I1209 22:33:04.111492   26899 kubeadm.go:394] duration metric: took 14.196477075s to StartCluster
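The polling loop above waits for the "default" service account to appear before the elevateKubeSystemPrivileges step (4.2s) and the overall StartCluster phase (14.2s) are declared done. The same checks can be repeated by hand against this profile's context; a minimal sketch, reusing the context name from the log:

    # Confirm the default service account and node state that the loop was waiting on
    kubectl --context addons-495659 -n default get serviceaccount default
    kubectl --context addons-495659 get nodes -o wide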
	I1209 22:33:04.111509   26899 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:33:04.111660   26899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:33:04.112032   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:33:04.112218   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 22:33:04.112244   26899 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:33:04.112297   26899 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
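The toEnable map above records which addons this profile starts with (ingress, metrics-server, registry, storage-provisioner, and so on). Outside the test harness, the same set can be listed and toggled with the minikube CLI; a minimal sketch using the profile name from this log:

    # Show addon status for the profile
    minikube addons list -p addons-495659

    # Toggle an individual addon, e.g. metrics-server
    minikube addons enable metrics-server -p addons-495659
    minikube addons disable metrics-server -p addons-495659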
	I1209 22:33:04.112387   26899 addons.go:69] Setting yakd=true in profile "addons-495659"
	I1209 22:33:04.112406   26899 addons.go:234] Setting addon yakd=true in "addons-495659"
	I1209 22:33:04.112441   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112455   26899 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-495659"
	I1209 22:33:04.112466   26899 config.go:182] Loaded profile config "addons-495659": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:33:04.112480   26899 addons.go:69] Setting cloud-spanner=true in profile "addons-495659"
	I1209 22:33:04.112491   26899 addons.go:234] Setting addon cloud-spanner=true in "addons-495659"
	I1209 22:33:04.112471   26899 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-495659"
	I1209 22:33:04.112526   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112558   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112614   26899 addons.go:69] Setting storage-provisioner=true in profile "addons-495659"
	I1209 22:33:04.112635   26899 addons.go:234] Setting addon storage-provisioner=true in "addons-495659"
	I1209 22:33:04.112663   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112726   26899 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-495659"
	I1209 22:33:04.112747   26899 addons.go:69] Setting registry=true in profile "addons-495659"
	I1209 22:33:04.112803   26899 addons.go:69] Setting gcp-auth=true in profile "addons-495659"
	I1209 22:33:04.112807   26899 addons.go:234] Setting addon registry=true in "addons-495659"
	I1209 22:33:04.112834   26899 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-495659"
	I1209 22:33:04.112881   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112911   26899 addons.go:69] Setting ingress=true in profile "addons-495659"
	I1209 22:33:04.112917   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.112929   26899 addons.go:69] Setting ingress-dns=true in profile "addons-495659"
	I1209 22:33:04.112759   26899 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-495659"
	I1209 22:33:04.112941   26899 addons.go:234] Setting addon ingress-dns=true in "addons-495659"
	I1209 22:33:04.112954   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112956   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.112970   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112972   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113005   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.112791   26899 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-495659"
	I1209 22:33:04.113025   26899 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-495659"
	I1209 22:33:04.112826   26899 mustload.go:65] Loading cluster: addons-495659
	I1209 22:33:04.112768   26899 addons.go:69] Setting metrics-server=true in profile "addons-495659"
	I1209 22:33:04.113038   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113042   26899 addons.go:234] Setting addon metrics-server=true in "addons-495659"
	I1209 22:33:04.112894   26899 addons.go:69] Setting default-storageclass=true in profile "addons-495659"
	I1209 22:33:04.113055   26899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-495659"
	I1209 22:33:04.112896   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.112782   26899 addons.go:69] Setting volcano=true in profile "addons-495659"
	I1209 22:33:04.113076   26899 addons.go:234] Setting addon volcano=true in "addons-495659"
	I1209 22:33:04.113093   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113100   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112429   26899 addons.go:69] Setting inspektor-gadget=true in profile "addons-495659"
	I1209 22:33:04.113142   26899 addons.go:234] Setting addon inspektor-gadget=true in "addons-495659"
	I1209 22:33:04.113171   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112906   26899 addons.go:69] Setting volumesnapshots=true in profile "addons-495659"
	I1209 22:33:04.113294   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113299   26899 addons.go:234] Setting addon volumesnapshots=true in "addons-495659"
	I1209 22:33:04.112883   26899 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-495659"
	I1209 22:33:04.112930   26899 addons.go:234] Setting addon ingress=true in "addons-495659"
	I1209 22:33:04.113321   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113340   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113390   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113058   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113447   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113474   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113511   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113538   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113604   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.113705   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.113769   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113791   26899 config.go:182] Loaded profile config "addons-495659": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:33:04.113799   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.113963   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113994   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.114044   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.114074   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.114277   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.114312   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113804   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.114694   26899 out.go:177] * Verifying Kubernetes components...
	I1209 22:33:04.113606   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.114820   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113814   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.116079   26899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:33:04.131975   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.132025   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.135503   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.135530   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41703
	I1209 22:33:04.135554   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.135514   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34895
	I1209 22:33:04.135872   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.135906   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.136153   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.136285   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.136742   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.136762   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.136745   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.136816   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.136892   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36641
	I1209 22:33:04.137130   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.137180   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.137737   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.137774   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.137994   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38731
	I1209 22:33:04.138161   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I1209 22:33:04.138323   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.138360   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.138491   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.138631   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.139039   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.139060   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.139331   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.139348   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.139369   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.139439   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.139893   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.139910   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.139958   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.140001   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.140434   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.140957   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.140989   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.146754   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36983
	I1209 22:33:04.148201   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.148854   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.148909   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.164241   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.164882   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.164902   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.165253   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.165787   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.165824   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.174183   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I1209 22:33:04.174871   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.175546   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.175582   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.175967   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.176141   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.176973   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I1209 22:33:04.177508   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.178447   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.178472   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.179014   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.179378   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.180325   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44739
	I1209 22:33:04.181735   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.182206   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.182224   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.182645   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.183249   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.183294   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.183637   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37525
	I1209 22:33:04.184047   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I1209 22:33:04.184283   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.184399   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.185105   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.185128   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.185447   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.185650   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.185765   26899 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-495659"
	I1209 22:33:04.185808   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.186477   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I1209 22:33:04.187148   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.187148   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.187636   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.187654   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.187857   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.188011   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.188050   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.188229   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.188260   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.188502   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I1209 22:33:04.189020   26899 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1209 22:33:04.189277   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I1209 22:33:04.189473   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.189980   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.190003   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.190311   26899 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 22:33:04.190331   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1209 22:33:04.190349   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.190371   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.190408   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.190423   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46119
	I1209 22:33:04.191031   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.191073   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.191120   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.191132   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.191193   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.191216   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.191629   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.191674   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.191772   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.192189   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.192226   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.192755   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.192925   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.192962   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.194735   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36847
	I1209 22:33:04.194837   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.194852   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.195270   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.195455   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.195958   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.196536   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.196554   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.196812   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.197009   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.197255   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.197430   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.197798   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.198607   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.199058   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.199943   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.200392   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39737
	I1209 22:33:04.201159   26899 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 22:33:04.201187   26899 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 22:33:04.201267   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.202889   26899 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1209 22:33:04.202986   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1209 22:33:04.202998   26899 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 22:33:04.203014   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 22:33:04.203038   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.203195   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.203218   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.203317   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I1209 22:33:04.203321   26899 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 22:33:04.203421   26899 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 22:33:04.203440   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.204000   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.204094   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.204237   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.204256   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.204598   26899 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1209 22:33:04.204654   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 22:33:04.204749   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.204796   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.204809   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.205296   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.205339   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.204724   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.205455   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.205531   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.205966   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.206478   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.206514   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.207100   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.207120   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.208070   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.208148   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.208726   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.208764   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.208948   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.209507   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.209526   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.209937   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.210104   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.210300   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.210363   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.210384   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.211399   26899 addons.go:234] Setting addon default-storageclass=true in "addons-495659"
	I1209 22:33:04.211433   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.211797   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.211825   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.211908   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.212248   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.212385   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.212480   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.212580   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.213912   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.214746   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.214779   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.214962   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.215103   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.215246   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.215371   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.216221   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37409
	I1209 22:33:04.216592   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.217711   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.217733   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.218088   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.218236   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.221537   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44295
	I1209 22:33:04.222395   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.222782   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.223654   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.223671   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.224053   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.224394   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.224643   26899 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1209 22:33:04.226059   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.226267   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:04.226280   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:04.227168   26899 out.go:177]   - Using image docker.io/registry:2.8.3
	I1209 22:33:04.228170   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:04.228172   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:04.228189   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:04.228198   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:04.228204   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:04.228393   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:04.228420   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:04.228430   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	W1209 22:33:04.228505   26899 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1209 22:33:04.228834   26899 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 22:33:04.228871   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 22:33:04.228893   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.232579   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.232942   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.232968   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.233298   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.233500   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.233676   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.233801   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.237599   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I1209 22:33:04.238016   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I1209 22:33:04.240028   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I1209 22:33:04.240056   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.240121   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.240657   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.240677   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.240806   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.240820   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.241234   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.241255   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.241445   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.241479   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I1209 22:33:04.241448   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.241964   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46381
	I1209 22:33:04.242520   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.242605   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.243128   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.243384   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.243397   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.243480   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.243486   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.243716   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.243986   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.244041   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.244130   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.244142   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.244423   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.244494   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.244537   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.245515   26899 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 22:33:04.245810   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.245840   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.245521   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 22:33:04.246030   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.246229   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1209 22:33:04.247505   26899 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 22:33:04.247538   26899 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 22:33:04.247584   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.248209   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.248325   26899 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 22:33:04.249674   26899 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1209 22:33:04.250850   26899 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1209 22:33:04.250859   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41491
	I1209 22:33:04.250922   26899 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1209 22:33:04.250941   26899 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1209 22:33:04.250974   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.251425   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.252043   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.252060   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.252442   26899 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 22:33:04.252457   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 22:33:04.252470   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.252473   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.252652   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.252708   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.253237   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.253256   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.253612   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.253675   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34309
	I1209 22:33:04.253911   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.254234   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.254253   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.254734   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.254750   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.254797   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.255416   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.255609   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.256022   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.256170   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.256848   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.256865   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.259683   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.259694   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.259698   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.259706   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I1209 22:33:04.259699   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.259748   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.259762   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.259776   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.259781   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.259799   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.259819   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46429
	I1209 22:33:04.259893   26899 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1209 22:33:04.260155   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.260230   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.260291   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.260447   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.260516   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.260534   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.260569   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.260668   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.260695   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.260707   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.260727   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.260946   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.260963   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.261039   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.261287   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.261308   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.261481   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.261627   26899 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 22:33:04.261637   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 22:33:04.261647   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.261808   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.261848   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.263242   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.263774   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.264771   26899 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 22:33:04.265412   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.265477   26899 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1209 22:33:04.265849   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.265963   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.266131   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.266294   26899 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 22:33:04.266319   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 22:33:04.266322   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.266334   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.266460   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.266589   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.267059   26899 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 22:33:04.267076   26899 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 22:33:04.267087   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.270114   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.270598   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.270916   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.270937   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.271000   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.271014   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.271090   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.271190   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.271240   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.271327   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.271375   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.271586   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.271624   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.271765   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	W1209 22:33:04.272678   26899 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44612->192.168.39.123:22: read: connection reset by peer
	I1209 22:33:04.272725   26899 retry.go:31] will retry after 219.84003ms: ssh: handshake failed: read tcp 192.168.39.1:44612->192.168.39.123:22: read: connection reset by peer
	I1209 22:33:04.276933   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I1209 22:33:04.277275   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.277755   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.277773   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.278110   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.278278   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.279962   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.282348   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 22:33:04.283712   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 22:33:04.284153   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45011
	I1209 22:33:04.284460   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I1209 22:33:04.284679   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.284855   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.285126   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.285144   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.285423   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.285626   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.285644   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.285663   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.285974   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.286179   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 22:33:04.286198   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.287790   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.287955   26899 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 22:33:04.287966   26899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 22:33:04.287978   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.288197   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.288631   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 22:33:04.289494   26899 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 22:33:04.290626   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 22:33:04.290843   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.291186   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.291201   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.291328   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.291421   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.291493   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.291608   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.291796   26899 out.go:177]   - Using image docker.io/busybox:stable
	I1209 22:33:04.292829   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 22:33:04.292934   26899 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 22:33:04.292951   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 22:33:04.292968   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.294978   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 22:33:04.295621   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.295922   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.295948   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.296107   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.296260   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.296392   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.296528   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.297138   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 22:33:04.298420   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 22:33:04.298440   26899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 22:33:04.298461   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.301275   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.301756   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.301790   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.301805   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.301975   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.302100   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.302221   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	W1209 22:33:04.302820   26899 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44632->192.168.39.123:22: read: connection reset by peer
	I1209 22:33:04.302841   26899 retry.go:31] will retry after 165.6651ms: ssh: handshake failed: read tcp 192.168.39.1:44632->192.168.39.123:22: read: connection reset by peer
	I1209 22:33:04.632575   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 22:33:04.635372   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 22:33:04.653637   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 22:33:04.666894   26899 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 22:33:04.666921   26899 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 22:33:04.683757   26899 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 22:33:04.683786   26899 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 22:33:04.688876   26899 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 22:33:04.688906   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1209 22:33:04.695018   26899 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 22:33:04.695044   26899 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 22:33:04.723045   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 22:33:04.752361   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 22:33:04.773276   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 22:33:04.774559   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 22:33:04.778464   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 22:33:04.783264   26899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:33:04.783623   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
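(The pipeline above rewrites the kube-system/coredns ConfigMap in place: it inserts a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the host-side gateway address, 192.168.39.1 here, and adds the log plugin before errors. Reconstructed from the sed expression only, the injected stanza looks roughly like this; indentation in the live Corefile may differ:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
)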
	I1209 22:33:04.825093   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 22:33:04.825124   26899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 22:33:04.856705   26899 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 22:33:04.856730   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 22:33:04.888729   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 22:33:04.907800   26899 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 22:33:04.907827   26899 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 22:33:04.920498   26899 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 22:33:04.920520   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 22:33:04.933902   26899 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 22:33:04.933923   26899 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 22:33:05.020320   26899 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 22:33:05.020346   26899 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 22:33:05.083200   26899 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 22:33:05.083232   26899 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 22:33:05.088406   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 22:33:05.088431   26899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 22:33:05.160766   26899 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 22:33:05.160793   26899 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 22:33:05.208195   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 22:33:05.222279   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 22:33:05.222305   26899 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 22:33:05.313711   26899 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 22:33:05.313738   26899 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 22:33:05.346115   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 22:33:05.346138   26899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 22:33:05.434997   26899 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 22:33:05.435018   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 22:33:05.500964   26899 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 22:33:05.500992   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 22:33:05.556713   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 22:33:05.596976   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 22:33:05.596998   26899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 22:33:05.646393   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 22:33:05.653778   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 22:33:05.873628   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 22:33:05.873660   26899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 22:33:06.257997   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.625381642s)
	I1209 22:33:06.258067   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:06.258079   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:06.258451   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:06.258477   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:06.258492   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:06.258503   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:06.258515   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:06.258779   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:06.258808   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:06.258782   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:06.274721   26899 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 22:33:06.274748   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 22:33:06.520818   26899 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 22:33:06.520850   26899 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 22:33:07.026991   26899 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 22:33:07.027021   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 22:33:07.130042   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.494633829s)
	I1209 22:33:07.130102   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:07.130114   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:07.130417   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:07.130433   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:07.130442   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:07.130449   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:07.130685   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:07.130699   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:07.468994   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.815316492s)
	I1209 22:33:07.469043   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.745962887s)
	I1209 22:33:07.469048   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:07.469081   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:07.469133   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:07.469211   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:07.469527   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:07.469540   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:07.469529   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:07.469557   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:07.469548   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:07.469578   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:07.469579   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:07.469586   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:07.469641   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:07.469768   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:07.469777   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:07.469838   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:07.469846   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:07.554211   26899 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 22:33:07.554243   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 22:33:07.862725   26899 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 22:33:07.862760   26899 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 22:33:08.173532   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 22:33:11.263328   26899 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 22:33:11.263367   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:11.266303   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:11.266710   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:11.266740   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:11.266894   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:11.267093   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:11.267248   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:11.267396   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:11.597008   26899 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 22:33:11.805212   26899 addons.go:234] Setting addon gcp-auth=true in "addons-495659"
	I1209 22:33:11.805268   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:11.805621   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:11.805702   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:11.821217   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39515
	I1209 22:33:11.821739   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:11.822214   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:11.822234   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:11.822533   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:11.823038   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:11.823075   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:11.838084   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
	I1209 22:33:11.838576   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:11.839110   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:11.839129   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:11.839483   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:11.839699   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:11.841181   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:11.841407   26899 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 22:33:11.841432   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:11.843959   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:11.844352   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:11.844384   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:11.844511   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:11.844666   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:11.844806   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:11.844917   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:12.227715   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.47531758s)
	I1209 22:33:12.227771   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.227783   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.227816   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.454495457s)
	I1209 22:33:12.227863   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.227875   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.227884   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.453299791s)
	I1209 22:33:12.227915   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.227929   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.227985   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.449496234s)
	I1209 22:33:12.228002   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.228010   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.228012   26899 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.444717406s)
	I1209 22:33:12.228033   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.228046   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.228054   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.228061   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.228072   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.228082   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.228092   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.228099   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.228171   26899 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.444504237s)
	I1209 22:33:12.228193   26899 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1209 22:33:12.228396   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.228432   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.228439   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.228449   26899 addons.go:475] Verifying addon ingress=true in "addons-495659"
	I1209 22:33:12.228669   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.228685   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.228694   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.228702   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.229073   26899 node_ready.go:35] waiting up to 6m0s for node "addons-495659" to be "Ready" ...
	I1209 22:33:12.229259   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.229285   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.229292   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.229428   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.229450   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.229457   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.230155   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.230170   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.230177   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.230184   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.230347   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.341589189s)
	I1209 22:33:12.230372   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.230380   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.230415   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.230434   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.230437   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.230507   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.673768887s)
	I1209 22:33:12.230533   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.230544   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.230665   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.230676   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.230680   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.584253582s)
	W1209 22:33:12.230711   26899 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 22:33:12.230749   26899 retry.go:31] will retry after 202.79381ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
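(The failure above is the usual CRD-establishment race: the VolumeSnapshot CRDs and the dependent VolumeSnapshotClass object are applied in the same kubectl invocation, before the API server has registered the new kinds, hence "ensure CRDs are installed first". The test tooling simply retries and later re-applies with --force, as seen further down. Purely as an illustration, and not part of this test run, one way to sidestep the race is to wait for the CRDs to reach the Established condition before applying the class, e.g.:

        kubectl wait --for condition=established --timeout=60s \
          crd/volumesnapshotclasses.snapshot.storage.k8s.io \
          crd/volumesnapshotcontents.snapshot.storage.k8s.io \
          crd/volumesnapshots.snapshot.storage.k8s.io
        kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
)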
	I1209 22:33:12.230684   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.230775   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.230809   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.577004101s)
	I1209 22:33:12.230435   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.022206507s)
	I1209 22:33:12.230829   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.230838   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.230848   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.230840   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.231250   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.231271   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.231295   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.231301   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.231308   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.231313   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.231364   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.231387   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.231392   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.231590   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.231599   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.231607   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.231614   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.232507   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.232544   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.232551   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.232629   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.232654   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.232686   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.232691   26899 out.go:177] * Verifying ingress addon...
	I1209 22:33:12.232817   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.232832   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.232840   26899 addons.go:475] Verifying addon registry=true in "addons-495659"
	I1209 22:33:12.232693   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.234887   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.234898   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.235233   26899 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-495659 service yakd-dashboard -n yakd-dashboard
	
	I1209 22:33:12.235682   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.235691   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.235703   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.235718   26899 addons.go:475] Verifying addon metrics-server=true in "addons-495659"
	I1209 22:33:12.236051   26899 out.go:177] * Verifying registry addon...
	I1209 22:33:12.236127   26899 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 22:33:12.238228   26899 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 22:33:12.241606   26899 node_ready.go:49] node "addons-495659" has status "Ready":"True"
	I1209 22:33:12.241631   26899 node_ready.go:38] duration metric: took 12.536019ms for node "addons-495659" to be "Ready" ...
	I1209 22:33:12.241642   26899 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:33:12.268765   26899 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 22:33:12.268793   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:12.268879   26899 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 22:33:12.268905   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:12.269982   26899 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:12.298583   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.298609   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.298888   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.298923   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.298948   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	W1209 22:33:12.299036   26899 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1209 22:33:12.315582   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.315608   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.316014   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.316025   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.316042   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.434234   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 22:33:12.732924   26899 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-495659" context rescaled to 1 replicas
	I1209 22:33:12.740496   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:12.742534   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:13.249889   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:13.249905   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:13.750357   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:13.750427   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:14.536392   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:14.536479   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:14.564081   26899 pod_ready.go:103] pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:14.747552   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.573812865s)
	I1209 22:33:14.747632   26899 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.906201867s)
	I1209 22:33:14.747631   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:14.747791   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:14.748043   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:14.748058   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:14.748068   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:14.748075   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:14.748284   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:14.748299   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:14.748310   26899 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-495659"
	I1209 22:33:14.749358   26899 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 22:33:14.750097   26899 out.go:177] * Verifying csi-hostpath-driver addon...
	I1209 22:33:14.751807   26899 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 22:33:14.752514   26899 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 22:33:14.752845   26899 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 22:33:14.752858   26899 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 22:33:14.773230   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:14.773373   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:14.782063   26899 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 22:33:14.782093   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:14.942684   26899 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 22:33:14.942705   26899 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 22:33:15.001281   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.566997309s)
	I1209 22:33:15.001348   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:15.001363   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:15.001687   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:15.001736   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:15.001751   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:15.001760   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:15.001713   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:15.001971   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:15.002029   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:15.002049   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:15.072529   26899 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 22:33:15.072555   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 22:33:15.181724   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 22:33:15.243240   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:15.243818   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:15.342647   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:15.751688   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:15.751740   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:15.759993   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:16.277133   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:16.279599   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.09784483s)
	I1209 22:33:16.279641   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:16.279653   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:16.279928   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:16.279944   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:16.279955   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:16.279962   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:16.280313   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:16.280315   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:16.280334   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:16.281840   26899 addons.go:475] Verifying addon gcp-auth=true in "addons-495659"
	I1209 22:33:16.283310   26899 out.go:177] * Verifying gcp-auth addon...
	I1209 22:33:16.284973   26899 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 22:33:16.307731   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:16.308257   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:16.330721   26899 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 22:33:16.330741   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:16.741657   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:16.742732   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:16.758055   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:16.781843   26899 pod_ready.go:103] pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:16.790249   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:17.240512   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:17.241997   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:17.256178   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:17.288877   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:17.741649   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:17.741984   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:17.757886   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:17.788008   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:18.241912   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:18.243339   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:18.257069   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:18.288322   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:18.945494   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:18.946440   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:18.946554   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:18.946729   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:18.952843   26899 pod_ready.go:103] pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:19.240982   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:19.245501   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:19.257288   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:19.287628   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:19.740955   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:19.742827   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:19.757301   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:19.788915   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:20.245857   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:20.246387   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:20.256912   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:20.287920   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:20.740768   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:20.742329   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:20.756488   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:20.788457   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:21.240769   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:21.241799   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:21.259060   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:21.275369   26899 pod_ready.go:103] pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:21.287453   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:21.740534   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:21.742082   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:21.756484   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:21.789057   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:22.240395   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:22.242614   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:22.257163   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:22.288721   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:22.741778   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:22.742802   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:22.757860   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:22.788419   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:23.240957   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:23.242278   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:23.256746   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:23.276246   26899 pod_ready.go:93] pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:23.276267   26899 pod_ready.go:82] duration metric: took 11.006264843s for pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.276277   26899 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-665x9" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.277715   26899 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-665x9" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-665x9" not found
	I1209 22:33:23.277731   26899 pod_ready.go:82] duration metric: took 1.448647ms for pod "coredns-7c65d6cfc9-665x9" in "kube-system" namespace to be "Ready" ...
	E1209 22:33:23.277739   26899 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-665x9" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-665x9" not found
	I1209 22:33:23.277746   26899 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d7jm7" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.282626   26899 pod_ready.go:93] pod "coredns-7c65d6cfc9-d7jm7" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:23.282659   26899 pod_ready.go:82] duration metric: took 4.904458ms for pod "coredns-7c65d6cfc9-d7jm7" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.282672   26899 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.287277   26899 pod_ready.go:93] pod "etcd-addons-495659" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:23.287306   26899 pod_ready.go:82] duration metric: took 4.625929ms for pod "etcd-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.287318   26899 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.288086   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:23.291396   26899 pod_ready.go:93] pod "kube-apiserver-addons-495659" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:23.291413   26899 pod_ready.go:82] duration metric: took 4.085678ms for pod "kube-apiserver-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.291421   26899 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.474926   26899 pod_ready.go:93] pod "kube-controller-manager-addons-495659" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:23.474950   26899 pod_ready.go:82] duration metric: took 183.522974ms for pod "kube-controller-manager-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.474962   26899 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x6vmt" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.742380   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:23.743152   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:23.756321   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:23.789695   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:23.874625   26899 pod_ready.go:93] pod "kube-proxy-x6vmt" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:23.874655   26899 pod_ready.go:82] duration metric: took 399.68642ms for pod "kube-proxy-x6vmt" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.874669   26899 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:24.243738   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:24.243860   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:24.257020   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:24.274626   26899 pod_ready.go:93] pod "kube-scheduler-addons-495659" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:24.274650   26899 pod_ready.go:82] duration metric: took 399.973086ms for pod "kube-scheduler-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:24.274663   26899 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:24.288805   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:24.742685   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:24.743180   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:24.756732   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:24.788272   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:25.241598   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:25.242455   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:25.256908   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:25.288774   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:25.741277   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:25.743267   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:25.757201   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:25.789714   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:26.241602   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:26.243604   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:26.257076   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:26.282211   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:26.288164   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:26.741439   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:26.741753   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:26.757103   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:26.788170   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:27.463878   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:27.464272   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:27.465030   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:27.465800   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:27.741461   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:27.742329   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:27.756917   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:27.789227   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:28.240918   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:28.242263   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:28.256808   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:28.288525   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:28.742038   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:28.742784   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:28.757263   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:28.781361   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:28.788085   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:29.241669   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:29.242816   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:29.256558   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:29.288236   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:29.741661   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:29.742516   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:29.757233   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:29.787552   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:30.241288   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:30.243626   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:30.258436   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:30.291738   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:30.741021   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:30.742260   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:30.757669   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:30.781703   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:30.788529   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:31.240451   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:31.241609   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:31.256868   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:31.287638   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:32.159039   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:32.160960   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:32.161294   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:32.161677   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:32.240516   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:32.242838   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:32.258040   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:32.288627   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:32.746804   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:32.748595   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:32.756110   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:32.788907   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:33.242590   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:33.243167   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:33.256890   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:33.280663   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:33.288308   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:33.741740   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:33.745461   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:33.765110   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:33.788541   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:34.284714   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:34.284869   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:34.288190   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:34.289933   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:34.741227   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:34.741367   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:34.757801   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:34.791240   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:35.241343   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:35.242281   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:35.256974   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:35.281699   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:35.288600   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:35.741395   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:35.741940   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:35.756504   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:35.787416   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:36.240169   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:36.242836   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:36.256869   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:36.288362   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:36.741453   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:36.742553   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:36.756855   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:36.789101   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:37.241993   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:37.242583   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:37.256940   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:37.288243   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:37.740711   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:37.742262   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:37.756374   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:37.780728   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:37.787628   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:38.241299   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:38.242408   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:38.256763   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:38.288858   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:38.742515   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:38.742857   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:38.757206   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:38.787590   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:39.241672   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:39.242580   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:39.257041   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:39.341725   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:39.741241   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:39.742269   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:39.757000   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:39.788354   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:40.242331   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:40.242524   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:40.257363   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:40.280607   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:40.287838   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:40.741765   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:40.743166   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:40.756837   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:40.788269   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:41.241194   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:41.242859   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:41.256045   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:41.287753   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:41.740935   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:41.742236   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:41.756391   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:41.790421   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:42.241268   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:42.242733   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:42.257044   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:42.281843   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:42.288785   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:42.741803   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:42.743505   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:42.756908   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:42.788401   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:43.240936   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:43.242621   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:43.257312   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:43.288654   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:43.742575   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:43.842831   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:43.843147   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:43.843178   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:44.240970   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:44.242669   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:44.255879   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:44.288450   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:44.741097   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:44.742376   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:44.756609   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:44.779833   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:44.788389   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:45.240251   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:45.242133   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:45.256358   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:45.288245   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:45.744190   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:45.744310   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:45.757213   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:45.788445   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:46.241342   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:46.241739   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:46.257495   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:46.287746   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:46.740904   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:46.742227   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:46.757165   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:46.781079   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:46.789270   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:47.242086   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:47.242461   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:47.257065   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:47.288547   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:47.741156   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:47.743390   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:47.756449   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:47.788147   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:48.242176   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:48.242225   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:48.257521   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:48.288536   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:48.743423   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:48.743800   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:48.758103   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:48.782046   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:48.788475   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:49.241804   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:49.242616   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:49.257604   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:49.290202   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:49.741029   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:49.742185   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:49.756502   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:49.787990   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:50.241337   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:50.242545   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:50.257018   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:50.287558   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:50.761534   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:50.761562   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:50.762302   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:50.787454   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:51.242415   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:51.242490   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:51.256880   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:51.281932   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:51.289030   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:51.740931   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:51.741510   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:51.756459   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:51.787590   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:52.241731   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:52.242226   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:52.256565   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:52.288296   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:52.741783   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:52.742457   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:52.756661   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:52.787907   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:53.239989   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:53.242141   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:53.256795   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:53.289095   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:53.741557   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:53.742709   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:53.757979   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:53.782006   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:53.789029   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:54.240582   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:54.241726   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:54.256408   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:54.287791   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:54.741436   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:54.741641   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:54.757082   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:54.788060   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:55.241887   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:55.242002   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:55.257422   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:55.287766   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:55.740673   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:55.742171   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:55.756150   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:55.788874   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:56.240880   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:56.242268   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:56.257200   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:56.281512   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:56.287607   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:56.741981   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:56.742109   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:56.756843   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:57.184605   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:57.242757   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:57.243895   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:57.259892   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:57.288282   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:57.740825   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:57.742374   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:57.757128   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:57.793411   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:58.241919   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:58.243342   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:58.257124   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:58.281628   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:58.342541   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:58.740898   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:58.742488   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:58.762260   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:58.795716   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:59.241746   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:59.242094   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:59.256547   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:59.289221   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:59.742238   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:59.742568   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:59.757568   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:59.787899   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:00.240269   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:00.241838   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:34:00.258101   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:00.287894   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:00.741499   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:00.743514   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:34:00.757918   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:00.781180   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:00.787612   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:01.240466   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:01.242276   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:34:01.256229   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:01.287846   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:01.749020   26899 kapi.go:107] duration metric: took 49.510785984s to wait for kubernetes.io/minikube-addons=registry ...
	I1209 22:34:01.750985   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:01.758204   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:01.790226   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:02.241461   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:02.257539   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:02.287835   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:02.741055   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:02.757456   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:02.788542   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:03.241050   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:03.257148   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:03.288092   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:03.288754   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:03.741154   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:03.757296   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:03.788096   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:04.241806   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:04.257518   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:04.288081   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:04.741198   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:04.757752   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:04.788111   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:05.242186   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:05.257266   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:05.341268   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:05.749219   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:05.764318   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:05.788736   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:05.789798   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:06.240888   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:06.256891   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:06.287732   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:06.741676   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:06.757100   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:06.788606   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:07.244336   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:07.258612   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:07.289584   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:07.740382   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:07.756464   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:07.787854   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:08.241619   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:08.256531   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:08.280791   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:08.288681   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:08.750115   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:08.765303   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:08.805207   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:09.241267   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:09.673842   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:09.675446   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:09.777854   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:09.778173   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:09.884909   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:10.242684   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:10.258022   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:10.284348   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:10.288460   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:10.740337   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:10.757895   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:10.788328   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:11.241575   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:11.257073   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:11.341754   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:11.740932   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:11.756609   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:11.788352   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:12.240469   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:12.258520   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:12.289583   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:12.741259   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:12.756810   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:12.780297   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:12.788132   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:13.245214   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:13.258787   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:13.287663   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:13.741547   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:13.757387   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:13.788531   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:14.240917   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:14.257714   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:14.288060   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:14.744167   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:14.757430   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:14.782535   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:14.789235   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:15.243320   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:15.257520   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:15.290068   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:15.741177   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:15.757335   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:15.788364   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:16.240227   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:16.256731   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:16.288037   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:16.741932   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:16.757558   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:16.788801   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:17.242442   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:17.256427   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:17.281337   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:17.288630   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:17.741555   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:17.760122   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:17.787979   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:18.242685   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:18.260742   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:18.289032   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:18.746433   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:18.762757   26899 kapi.go:107] duration metric: took 1m4.010238048s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 22:34:18.789140   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:19.241266   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:19.282796   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:19.289253   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:19.742722   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:19.788226   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:20.241520   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:20.287653   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:20.745036   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:20.788511   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:21.240359   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:21.288361   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:21.740870   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:21.780739   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:21.788541   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:22.240230   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:22.288559   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:22.740961   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:22.788790   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:23.241361   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:23.585963   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:23.742905   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:23.781070   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:23.788790   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:24.241187   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:24.288888   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:24.742154   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:24.787929   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:25.242465   26899 kapi.go:107] duration metric: took 1m13.00633547s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 22:34:25.288678   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:25.785054   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:25.788777   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:26.291783   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:26.788130   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:27.288796   26899 kapi.go:107] duration metric: took 1m11.003819422s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 22:34:27.290719   26899 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-495659 cluster.
	I1209 22:34:27.292052   26899 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 22:34:27.293527   26899 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1209 22:34:27.294944   26899 out.go:177] * Enabled addons: amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1209 22:34:27.296221   26899 addons.go:510] duration metric: took 1m23.183925972s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
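The gcp-auth note above can be exercised directly; below is a minimal sketch of creating a pod that opts out of credential mounting, assuming the addon only checks for the presence of the `gcp-auth-skip-secret` label key as the message states (the pod name, image, and label value are illustrative and not taken from this run):

    # Hypothetical pod that asks the gcp-auth webhook not to mount GCP credentials into it
    kubectl --context addons-495659 run skip-gcp-auth-demo \
      --image=busybox:1.36 \
      --labels=gcp-auth-skip-secret=true \
      --restart=Never -- sleep 3600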
	I1209 22:34:28.281187   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:30.282803   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:32.780777   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:34.781024   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:36.782226   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:39.280201   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:41.286389   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:43.781192   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:46.281106   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:48.781341   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:49.779815   26899 pod_ready.go:93] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"True"
	I1209 22:34:49.779839   26899 pod_ready.go:82] duration metric: took 1m25.505168562s for pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace to be "Ready" ...
	I1209 22:34:49.779848   26899 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-wbphv" in "kube-system" namespace to be "Ready" ...
	I1209 22:34:49.783784   26899 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-wbphv" in "kube-system" namespace has status "Ready":"True"
	I1209 22:34:49.783801   26899 pod_ready.go:82] duration metric: took 3.946164ms for pod "nvidia-device-plugin-daemonset-wbphv" in "kube-system" namespace to be "Ready" ...
	I1209 22:34:49.783863   26899 pod_ready.go:39] duration metric: took 1m37.542205547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:34:49.783888   26899 api_server.go:52] waiting for apiserver process to appear ...
	I1209 22:34:49.783914   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 22:34:49.783970   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 22:34:49.825921   26899 cri.go:89] found id: "0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72"
	I1209 22:34:49.825945   26899 cri.go:89] found id: ""
	I1209 22:34:49.825953   26899 logs.go:282] 1 containers: [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72]
	I1209 22:34:49.825996   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:34:49.829776   26899 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 22:34:49.829829   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 22:34:49.870376   26899 cri.go:89] found id: "4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c"
	I1209 22:34:49.870394   26899 cri.go:89] found id: ""
	I1209 22:34:49.870401   26899 logs.go:282] 1 containers: [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c]
	I1209 22:34:49.870446   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:34:49.874556   26899 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 22:34:49.874606   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 22:34:49.914512   26899 cri.go:89] found id: "0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51"
	I1209 22:34:49.914539   26899 cri.go:89] found id: ""
	I1209 22:34:49.914545   26899 logs.go:282] 1 containers: [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51]
	I1209 22:34:49.914590   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:34:49.918790   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 22:34:49.918836   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 22:34:49.955423   26899 cri.go:89] found id: "69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb"
	I1209 22:34:49.955441   26899 cri.go:89] found id: ""
	I1209 22:34:49.955448   26899 logs.go:282] 1 containers: [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb]
	I1209 22:34:49.955499   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:34:49.959129   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 22:34:49.959178   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 22:34:49.997890   26899 cri.go:89] found id: "03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea"
	I1209 22:34:49.997919   26899 cri.go:89] found id: ""
	I1209 22:34:49.997926   26899 logs.go:282] 1 containers: [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea]
	I1209 22:34:49.997971   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:34:50.001647   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 22:34:50.001700   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 22:34:50.044946   26899 cri.go:89] found id: "3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9"
	I1209 22:34:50.044968   26899 cri.go:89] found id: ""
	I1209 22:34:50.044975   26899 logs.go:282] 1 containers: [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9]
	I1209 22:34:50.045018   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:34:50.049033   26899 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 22:34:50.049085   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 22:34:50.087997   26899 cri.go:89] found id: ""
	I1209 22:34:50.088020   26899 logs.go:282] 0 containers: []
	W1209 22:34:50.088027   26899 logs.go:284] No container was found matching "kindnet"
	I1209 22:34:50.088036   26899 logs.go:123] Gathering logs for kubelet ...
	I1209 22:34:50.088047   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 22:34:50.145753   26899 logs.go:138] Found kubelet problem: Dec 09 22:33:16 addons-495659 kubelet[1207]: W1209 22:33:16.248349    1207 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-495659" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-495659' and this object
	W1209 22:34:50.145946   26899 logs.go:138] Found kubelet problem: Dec 09 22:33:16 addons-495659 kubelet[1207]: E1209 22:33:16.248388    1207 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-495659\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-495659' and this object" logger="UnhandledError"
	I1209 22:34:50.167423   26899 logs.go:123] Gathering logs for describe nodes ...
	I1209 22:34:50.167450   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 22:34:50.425246   26899 logs.go:123] Gathering logs for kube-scheduler [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb] ...
	I1209 22:34:50.425272   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb"
	I1209 22:34:50.467699   26899 logs.go:123] Gathering logs for CRI-O ...
	I1209 22:34:50.467729   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 22:34:51.534755   26899 logs.go:123] Gathering logs for dmesg ...
	I1209 22:34:51.534797   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 22:34:51.549092   26899 logs.go:123] Gathering logs for kube-apiserver [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72] ...
	I1209 22:34:51.549130   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72"
	I1209 22:34:51.597413   26899 logs.go:123] Gathering logs for etcd [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c] ...
	I1209 22:34:51.597443   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c"
	I1209 22:34:51.658321   26899 logs.go:123] Gathering logs for coredns [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51] ...
	I1209 22:34:51.658366   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51"
	I1209 22:34:51.696408   26899 logs.go:123] Gathering logs for kube-proxy [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea] ...
	I1209 22:34:51.696444   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea"
	I1209 22:34:51.733330   26899 logs.go:123] Gathering logs for kube-controller-manager [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9] ...
	I1209 22:34:51.733358   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9"
	I1209 22:34:51.799961   26899 logs.go:123] Gathering logs for container status ...
	I1209 22:34:51.800000   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 22:34:51.849597   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:34:51.849623   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 22:34:51.849675   26899 out.go:270] X Problems detected in kubelet:
	W1209 22:34:51.849690   26899 out.go:270]   Dec 09 22:33:16 addons-495659 kubelet[1207]: W1209 22:33:16.248349    1207 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-495659" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-495659' and this object
	W1209 22:34:51.849706   26899 out.go:270]   Dec 09 22:33:16 addons-495659 kubelet[1207]: E1209 22:33:16.248388    1207 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-495659\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-495659' and this object" logger="UnhandledError"
	I1209 22:34:51.849715   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:34:51.849723   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:35:01.851299   26899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 22:35:01.872018   26899 api_server.go:72] duration metric: took 1m57.759733019s to wait for apiserver process to appear ...
	I1209 22:35:01.872046   26899 api_server.go:88] waiting for apiserver healthz status ...
	I1209 22:35:01.872083   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 22:35:01.872130   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 22:35:01.922822   26899 cri.go:89] found id: "0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72"
	I1209 22:35:01.922847   26899 cri.go:89] found id: ""
	I1209 22:35:01.922857   26899 logs.go:282] 1 containers: [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72]
	I1209 22:35:01.922913   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:01.927123   26899 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 22:35:01.927179   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 22:35:01.974561   26899 cri.go:89] found id: "4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c"
	I1209 22:35:01.974582   26899 cri.go:89] found id: ""
	I1209 22:35:01.974591   26899 logs.go:282] 1 containers: [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c]
	I1209 22:35:01.974655   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:01.978685   26899 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 22:35:01.978743   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 22:35:02.018657   26899 cri.go:89] found id: "0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51"
	I1209 22:35:02.018680   26899 cri.go:89] found id: ""
	I1209 22:35:02.018687   26899 logs.go:282] 1 containers: [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51]
	I1209 22:35:02.018730   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:02.022840   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 22:35:02.022898   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 22:35:02.071243   26899 cri.go:89] found id: "69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb"
	I1209 22:35:02.071271   26899 cri.go:89] found id: ""
	I1209 22:35:02.071279   26899 logs.go:282] 1 containers: [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb]
	I1209 22:35:02.071330   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:02.076515   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 22:35:02.076584   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 22:35:02.119449   26899 cri.go:89] found id: "03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea"
	I1209 22:35:02.119480   26899 cri.go:89] found id: ""
	I1209 22:35:02.119491   26899 logs.go:282] 1 containers: [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea]
	I1209 22:35:02.119555   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:02.123723   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 22:35:02.123801   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 22:35:02.169926   26899 cri.go:89] found id: "3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9"
	I1209 22:35:02.169957   26899 cri.go:89] found id: ""
	I1209 22:35:02.169967   26899 logs.go:282] 1 containers: [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9]
	I1209 22:35:02.170024   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:02.174412   26899 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 22:35:02.174486   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 22:35:02.212103   26899 cri.go:89] found id: ""
	I1209 22:35:02.212136   26899 logs.go:282] 0 containers: []
	W1209 22:35:02.212150   26899 logs.go:284] No container was found matching "kindnet"
	I1209 22:35:02.212162   26899 logs.go:123] Gathering logs for container status ...
	I1209 22:35:02.212177   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 22:35:02.261653   26899 logs.go:123] Gathering logs for kubelet ...
	I1209 22:35:02.261685   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 22:35:02.321962   26899 logs.go:138] Found kubelet problem: Dec 09 22:33:16 addons-495659 kubelet[1207]: W1209 22:33:16.248349    1207 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-495659" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-495659' and this object
	W1209 22:35:02.322137   26899 logs.go:138] Found kubelet problem: Dec 09 22:33:16 addons-495659 kubelet[1207]: E1209 22:33:16.248388    1207 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-495659\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-495659' and this object" logger="UnhandledError"
	I1209 22:35:02.346178   26899 logs.go:123] Gathering logs for describe nodes ...
	I1209 22:35:02.346214   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 22:35:02.469641   26899 logs.go:123] Gathering logs for coredns [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51] ...
	I1209 22:35:02.469671   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51"
	I1209 22:35:02.510796   26899 logs.go:123] Gathering logs for kube-proxy [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea] ...
	I1209 22:35:02.510825   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea"
	I1209 22:35:02.548664   26899 logs.go:123] Gathering logs for kube-controller-manager [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9] ...
	I1209 22:35:02.548693   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9"
	I1209 22:35:02.612895   26899 logs.go:123] Gathering logs for CRI-O ...
	I1209 22:35:02.612934   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 22:35:03.580580   26899 logs.go:123] Gathering logs for dmesg ...
	I1209 22:35:03.580629   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 22:35:03.594934   26899 logs.go:123] Gathering logs for kube-apiserver [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72] ...
	I1209 22:35:03.594968   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72"
	I1209 22:35:03.638381   26899 logs.go:123] Gathering logs for etcd [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c] ...
	I1209 22:35:03.638418   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c"
	I1209 22:35:03.716464   26899 logs.go:123] Gathering logs for kube-scheduler [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb] ...
	I1209 22:35:03.716501   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb"
	I1209 22:35:03.762891   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:35:03.762920   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 22:35:03.762988   26899 out.go:270] X Problems detected in kubelet:
	W1209 22:35:03.763002   26899 out.go:270]   Dec 09 22:33:16 addons-495659 kubelet[1207]: W1209 22:33:16.248349    1207 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-495659" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-495659' and this object
	W1209 22:35:03.763013   26899 out.go:270]   Dec 09 22:33:16 addons-495659 kubelet[1207]: E1209 22:33:16.248388    1207 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-495659\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-495659' and this object" logger="UnhandledError"
	I1209 22:35:03.763028   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:35:03.763040   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:35:13.764149   26899 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I1209 22:35:13.768720   26899 api_server.go:279] https://192.168.39.123:8443/healthz returned 200:
	ok
	I1209 22:35:13.769825   26899 api_server.go:141] control plane version: v1.31.2
	I1209 22:35:13.769854   26899 api_server.go:131] duration metric: took 11.897797249s to wait for apiserver health ...
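The healthz probe logged just above can be reproduced by hand; a minimal sketch, assuming the standard ~/.minikube certificate layout for this profile (the certificate paths are an assumption; only the endpoint address comes from the log):

    # Query the same apiserver health endpoint the log checks; expects the body "ok" on success
    curl --cacert ~/.minikube/ca.crt \
         --cert ~/.minikube/profiles/addons-495659/client.crt \
         --key ~/.minikube/profiles/addons-495659/client.key \
         https://192.168.39.123:8443/healthz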
	I1209 22:35:13.769864   26899 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 22:35:13.769888   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 22:35:13.769980   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 22:35:13.807265   26899 cri.go:89] found id: "0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72"
	I1209 22:35:13.807294   26899 cri.go:89] found id: ""
	I1209 22:35:13.807305   26899 logs.go:282] 1 containers: [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72]
	I1209 22:35:13.807369   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:13.811313   26899 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 22:35:13.811377   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 22:35:13.854925   26899 cri.go:89] found id: "4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c"
	I1209 22:35:13.854952   26899 cri.go:89] found id: ""
	I1209 22:35:13.854960   26899 logs.go:282] 1 containers: [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c]
	I1209 22:35:13.855006   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:13.860037   26899 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 22:35:13.860086   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 22:35:13.903000   26899 cri.go:89] found id: "0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51"
	I1209 22:35:13.903021   26899 cri.go:89] found id: ""
	I1209 22:35:13.903028   26899 logs.go:282] 1 containers: [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51]
	I1209 22:35:13.903072   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:13.908353   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 22:35:13.908407   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 22:35:13.944140   26899 cri.go:89] found id: "69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb"
	I1209 22:35:13.944162   26899 cri.go:89] found id: ""
	I1209 22:35:13.944172   26899 logs.go:282] 1 containers: [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb]
	I1209 22:35:13.944223   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:13.948012   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 22:35:13.948070   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 22:35:13.983935   26899 cri.go:89] found id: "03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea"
	I1209 22:35:13.983954   26899 cri.go:89] found id: ""
	I1209 22:35:13.983961   26899 logs.go:282] 1 containers: [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea]
	I1209 22:35:13.984001   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:13.988147   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 22:35:13.988205   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 22:35:14.033548   26899 cri.go:89] found id: "3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9"
	I1209 22:35:14.033571   26899 cri.go:89] found id: ""
	I1209 22:35:14.033582   26899 logs.go:282] 1 containers: [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9]
	I1209 22:35:14.033641   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:14.037633   26899 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 22:35:14.037699   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 22:35:14.074181   26899 cri.go:89] found id: ""
	I1209 22:35:14.074203   26899 logs.go:282] 0 containers: []
	W1209 22:35:14.074214   26899 logs.go:284] No container was found matching "kindnet"
	I1209 22:35:14.074224   26899 logs.go:123] Gathering logs for kube-controller-manager [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9] ...
	I1209 22:35:14.074238   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9"
	I1209 22:35:14.131215   26899 logs.go:123] Gathering logs for CRI-O ...
	I1209 22:35:14.131247   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 22:35:15.055958   26899 logs.go:123] Gathering logs for kubelet ...
	I1209 22:35:15.056004   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 22:35:15.117052   26899 logs.go:138] Found kubelet problem: Dec 09 22:33:16 addons-495659 kubelet[1207]: W1209 22:33:16.248349    1207 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-495659" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-495659' and this object
	W1209 22:35:15.117238   26899 logs.go:138] Found kubelet problem: Dec 09 22:33:16 addons-495659 kubelet[1207]: E1209 22:33:16.248388    1207 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-495659\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-495659' and this object" logger="UnhandledError"
	I1209 22:35:15.139988   26899 logs.go:123] Gathering logs for describe nodes ...
	I1209 22:35:15.140012   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 22:35:15.274654   26899 logs.go:123] Gathering logs for kube-apiserver [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72] ...
	I1209 22:35:15.274699   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72"
	I1209 22:35:15.341489   26899 logs.go:123] Gathering logs for etcd [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c] ...
	I1209 22:35:15.341535   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c"
	I1209 22:35:15.399237   26899 logs.go:123] Gathering logs for kube-proxy [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea] ...
	I1209 22:35:15.399268   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea"
	I1209 22:35:15.448277   26899 logs.go:123] Gathering logs for dmesg ...
	I1209 22:35:15.448311   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 22:35:15.463322   26899 logs.go:123] Gathering logs for coredns [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51] ...
	I1209 22:35:15.463357   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51"
	I1209 22:35:15.512136   26899 logs.go:123] Gathering logs for kube-scheduler [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb] ...
	I1209 22:35:15.512165   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb"
	I1209 22:35:15.564177   26899 logs.go:123] Gathering logs for container status ...
	I1209 22:35:15.564216   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 22:35:15.618057   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:35:15.618081   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 22:35:15.618130   26899 out.go:270] X Problems detected in kubelet:
	W1209 22:35:15.618140   26899 out.go:270]   Dec 09 22:33:16 addons-495659 kubelet[1207]: W1209 22:33:16.248349    1207 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-495659" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-495659' and this object
	W1209 22:35:15.618151   26899 out.go:270]   Dec 09 22:33:16 addons-495659 kubelet[1207]: E1209 22:33:16.248388    1207 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-495659\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-495659' and this object" logger="UnhandledError"
	I1209 22:35:15.618157   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:35:15.618162   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:35:25.627925   26899 system_pods.go:59] 18 kube-system pods found
	I1209 22:35:25.627953   26899 system_pods.go:61] "amd-gpu-device-plugin-k9c92" [0ae134a6-d82f-4b75-adef-ebd11156ef7e] Running
	I1209 22:35:25.627958   26899 system_pods.go:61] "coredns-7c65d6cfc9-d7jm7" [d8dad938-bb60-4879-907c-12003e131d8e] Running
	I1209 22:35:25.627962   26899 system_pods.go:61] "csi-hostpath-attacher-0" [9df0b766-98a8-45e9-a41a-b2d57a6f0b69] Running
	I1209 22:35:25.627966   26899 system_pods.go:61] "csi-hostpath-resizer-0" [1b9c7557-95a8-4767-8ae9-5765b9249de1] Running
	I1209 22:35:25.627969   26899 system_pods.go:61] "csi-hostpathplugin-g2mgw" [9d710134-71c5-4a26-86cd-f58e421e155c] Running
	I1209 22:35:25.627973   26899 system_pods.go:61] "etcd-addons-495659" [ad9e1594-8b6b-4f6b-a2b2-ba6c27608281] Running
	I1209 22:35:25.627977   26899 system_pods.go:61] "kube-apiserver-addons-495659" [8e8b50f7-6b12-436e-8373-822f3a7dce46] Running
	I1209 22:35:25.627981   26899 system_pods.go:61] "kube-controller-manager-addons-495659" [050e1ad7-dfe2-4dfd-aade-ba853c720d25] Running
	I1209 22:35:25.627985   26899 system_pods.go:61] "kube-ingress-dns-minikube" [2bccaa8d-e874-466c-96e6-476f10eab5b5] Running
	I1209 22:35:25.627988   26899 system_pods.go:61] "kube-proxy-x6vmt" [f74e8d2a-5b4f-4e61-8783-167e45a70839] Running
	I1209 22:35:25.627992   26899 system_pods.go:61] "kube-scheduler-addons-495659" [7dfad718-626c-4238-8c31-891a41614578] Running
	I1209 22:35:25.627996   26899 system_pods.go:61] "metrics-server-84c5f94fbc-drvs4" [697234f5-8b91-4bd8-9d7a-681c7fd5c8b3] Running
	I1209 22:35:25.628002   26899 system_pods.go:61] "nvidia-device-plugin-daemonset-wbphv" [373a99a7-1c49-427a-931d-f6d3bcb7cc29] Running
	I1209 22:35:25.628010   26899 system_pods.go:61] "registry-5cc95cd69-m98x5" [ecb1f96a-9905-45be-b670-6791c5067c07] Running
	I1209 22:35:25.628015   26899 system_pods.go:61] "registry-proxy-xqgz7" [8103c584-faf4-4900-8fda-b5367b887c19] Running
	I1209 22:35:25.628020   26899 system_pods.go:61] "snapshot-controller-56fcc65765-b5gd5" [96a1edd3-1afc-4328-804d-8e1a4b5c0655] Running
	I1209 22:35:25.628028   26899 system_pods.go:61] "snapshot-controller-56fcc65765-pz724" [8ef3c979-b020-4950-835f-4960308d5a38] Running
	I1209 22:35:25.628033   26899 system_pods.go:61] "storage-provisioner" [1c9a6458-b9f3-47d5-af12-07b1a97dbcdd] Running
	I1209 22:35:25.628044   26899 system_pods.go:74] duration metric: took 11.858172377s to wait for pod list to return data ...
	I1209 22:35:25.628058   26899 default_sa.go:34] waiting for default service account to be created ...
	I1209 22:35:25.630469   26899 default_sa.go:45] found service account: "default"
	I1209 22:35:25.630489   26899 default_sa.go:55] duration metric: took 2.422445ms for default service account to be created ...
	I1209 22:35:25.630497   26899 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 22:35:25.639968   26899 system_pods.go:86] 18 kube-system pods found
	I1209 22:35:25.639995   26899 system_pods.go:89] "amd-gpu-device-plugin-k9c92" [0ae134a6-d82f-4b75-adef-ebd11156ef7e] Running
	I1209 22:35:25.640003   26899 system_pods.go:89] "coredns-7c65d6cfc9-d7jm7" [d8dad938-bb60-4879-907c-12003e131d8e] Running
	I1209 22:35:25.640008   26899 system_pods.go:89] "csi-hostpath-attacher-0" [9df0b766-98a8-45e9-a41a-b2d57a6f0b69] Running
	I1209 22:35:25.640015   26899 system_pods.go:89] "csi-hostpath-resizer-0" [1b9c7557-95a8-4767-8ae9-5765b9249de1] Running
	I1209 22:35:25.640021   26899 system_pods.go:89] "csi-hostpathplugin-g2mgw" [9d710134-71c5-4a26-86cd-f58e421e155c] Running
	I1209 22:35:25.640030   26899 system_pods.go:89] "etcd-addons-495659" [ad9e1594-8b6b-4f6b-a2b2-ba6c27608281] Running
	I1209 22:35:25.640036   26899 system_pods.go:89] "kube-apiserver-addons-495659" [8e8b50f7-6b12-436e-8373-822f3a7dce46] Running
	I1209 22:35:25.640044   26899 system_pods.go:89] "kube-controller-manager-addons-495659" [050e1ad7-dfe2-4dfd-aade-ba853c720d25] Running
	I1209 22:35:25.640050   26899 system_pods.go:89] "kube-ingress-dns-minikube" [2bccaa8d-e874-466c-96e6-476f10eab5b5] Running
	I1209 22:35:25.640060   26899 system_pods.go:89] "kube-proxy-x6vmt" [f74e8d2a-5b4f-4e61-8783-167e45a70839] Running
	I1209 22:35:25.640066   26899 system_pods.go:89] "kube-scheduler-addons-495659" [7dfad718-626c-4238-8c31-891a41614578] Running
	I1209 22:35:25.640072   26899 system_pods.go:89] "metrics-server-84c5f94fbc-drvs4" [697234f5-8b91-4bd8-9d7a-681c7fd5c8b3] Running
	I1209 22:35:25.640080   26899 system_pods.go:89] "nvidia-device-plugin-daemonset-wbphv" [373a99a7-1c49-427a-931d-f6d3bcb7cc29] Running
	I1209 22:35:25.640084   26899 system_pods.go:89] "registry-5cc95cd69-m98x5" [ecb1f96a-9905-45be-b670-6791c5067c07] Running
	I1209 22:35:25.640087   26899 system_pods.go:89] "registry-proxy-xqgz7" [8103c584-faf4-4900-8fda-b5367b887c19] Running
	I1209 22:35:25.640094   26899 system_pods.go:89] "snapshot-controller-56fcc65765-b5gd5" [96a1edd3-1afc-4328-804d-8e1a4b5c0655] Running
	I1209 22:35:25.640097   26899 system_pods.go:89] "snapshot-controller-56fcc65765-pz724" [8ef3c979-b020-4950-835f-4960308d5a38] Running
	I1209 22:35:25.640100   26899 system_pods.go:89] "storage-provisioner" [1c9a6458-b9f3-47d5-af12-07b1a97dbcdd] Running
	I1209 22:35:25.640106   26899 system_pods.go:126] duration metric: took 9.603358ms to wait for k8s-apps to be running ...
	I1209 22:35:25.640114   26899 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 22:35:25.640157   26899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:35:25.655968   26899 system_svc.go:56] duration metric: took 15.843283ms WaitForService to wait for kubelet
	I1209 22:35:25.655997   26899 kubeadm.go:582] duration metric: took 2m21.543718454s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:35:25.656027   26899 node_conditions.go:102] verifying NodePressure condition ...
	I1209 22:35:25.659154   26899 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:35:25.659181   26899 node_conditions.go:123] node cpu capacity is 2
	I1209 22:35:25.659197   26899 node_conditions.go:105] duration metric: took 3.165147ms to run NodePressure ...
	I1209 22:35:25.659210   26899 start.go:241] waiting for startup goroutines ...
	I1209 22:35:25.659225   26899 start.go:246] waiting for cluster config update ...
	I1209 22:35:25.659250   26899 start.go:255] writing updated cluster config ...
	I1209 22:35:25.659525   26899 ssh_runner.go:195] Run: rm -f paused
	I1209 22:35:25.708414   26899 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 22:35:25.711220   26899 out.go:177] * Done! kubectl is now configured to use "addons-495659" cluster and "default" namespace by default
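	(For reference, the diagnostics gathering visible earlier in this run boils down to a handful of crictl calls executed over SSH on the node: "sudo crictl logs --tail 400 <container-id>" per control-plane container, plus a dmesg tail. A minimal local sketch of the same idea in Go follows; it assumes crictl is installed and runnable via sudo, and the container ID is a placeholder, not one taken from this run.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailContainerLogs mirrors the per-container collection shown above:
	// it runs "sudo crictl logs --tail 400 <container-id>" and returns the output.
	func tailContainerLogs(containerID string) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", containerID).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Placeholder ID; substitute a real one listed by `sudo crictl ps -a`.
		logs, err := tailContainerLogs("CONTAINER_ID")
		if err != nil {
			fmt.Println("crictl logs failed:", err)
		}
		fmt.Print(logs)
	}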
	
	
	==> CRI-O <==
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.304292416Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/kicbase/echo-server:1.0\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.305085647Z" level=debug msg="Using registries.d directory /etc/containers/registries.d" file="docker/registries_d.go:80"
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.305429745Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\"" file="docker/docker_image_src.go:87"
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.305507289Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /run/containers/0/auth.json" file="config/config.go:846"
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.305543538Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.config/containers/auth.json" file="config/config.go:846"
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.305608457Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.docker/config.json" file="config/config.go:846"
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.305656116Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.dockercfg" file="config/config.go:846"
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.305682614Z" level=debug msg="No credentials for docker.io/kicbase/echo-server found" file="config/config.go:272"
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.305712072Z" level=debug msg=" No signature storage configuration found for docker.io/kicbase/echo-server:1.0, using built-in default file:///var/lib/containers/sigstore" file="docker/registries_d.go:176"
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.305748955Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.305807884Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.317239241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4a366b5-3a87-44be-8430-73f22e3b4b1c name=/runtime.v1.RuntimeService/Version
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.317310903Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4a366b5-3a87-44be-8430-73f22e3b4b1c name=/runtime.v1.RuntimeService/Version
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.318173842Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92d09599-fd18-4c28-a38f-7123c83fbd58 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.319341483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733783905319314764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92d09599-fd18-4c28-a38f-7123c83fbd58 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.320071133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64985780-a46b-4383-a888-1ff4ac074512 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.320220315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64985780-a46b-4383-a888-1ff4ac074512 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.320784582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c441db11bc82d36cb58c484d982f2b33d9746e3cfb59ab7e50f44d2a8f82beed,PodSandboxId:e637581f75904b9e88a63f5f1514109b9b225e11b150c77d97496a39d80a8e24,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733783769253960753,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e3e4108-ab04-4c22-9a48-b5b6431d743f,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceb3922f9f55ef7df0a0d5801bdd4a62c5a19c8dd9e2473dd0472aefdeebe31,PodSandboxId:87389b2b153f07c1dccbc9f998efb0b10c01a817c5e5e8d2d894091b5896fb2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733783728876642001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 305d5fb6-4c01-480c-9f96-855b1c53733a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28e66371077f653252136d166d1742ac8639d1b9a7e4dd02d12934085377950,PodSandboxId:d9e6d5198fba2a459eafb3cd723b8ad729a76f85f1ea8adc6834972889cf6869,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733783664318172070,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-2h4z2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 12f5f3ba-8cfc-4366-99d1-1894640e4cd9,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ebc384e5c36d9d368779e4ea68009fe8a86861e7d9e8c1eed0e6925e0dcee583,PodSandboxId:5fec4041f0950fa03e67dbc3250a54c105299eefdc5c881e2c632948cdd9ebf5,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733783645685428556,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9kssc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0e6c200f-ad27-4870-9685-9bb959b4f38e,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0189c62eed2880b26194e68fc06eaf9d1a5e5b8ca4807ae1dd43e603b62a24,PodSandboxId:edce69c7de9e630bb140531afad0562e63bf42d38a0fe636a1fe09dd1eb97223,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733783645011845809,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2jqqf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1b0b1c94-0839-4f65-a9a9-2cb0c5227fa2,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1253a32a552c9653ab385d2f4f005ae0b45d6e44a29b4644f4165d28518de200,PodSandboxId:92a3cc43270e6193b5f72e174a4acecb10ad1c904c6ace4acf749cc538497d46,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733783633362603291,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-srq65,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bc7e0933-639d-4e14-9285-644308291889,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199c999b39deffa7789b0eee1bc6ab3af1a3190deedcaf06522abe738cafc790,PodSandboxId:86a0860e2152fe0d9a304ce6c5625a736bdbcc772a5f8ab9c2d6b629f0063e2f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733783623675264185,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-drvs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 697234f5-8b91-4bd8-9d7a-681c7fd5c8b3,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac972c6ca8a2e0ccd932e682297bdc569dfc685d1ce8b88b78efc70298f3e4d,PodSandboxId:7529bf3e5c9afc3423fe4f4e149effa090097df337540b079d945a086719156f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attem
pt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733783602468855120,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-k9c92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae134a6-d82f-4b75-adef-ebd11156ef7e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e201850e9fdd2ca599d1b648e2ba6378116f69efb78d09272894dd82cd887a,PodSandboxId:6e8378f8bdc25510d3ee18bafc946fd1c54abf3639896f7c1cb03d940138c758,Metadata:&ContainerM
etadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733783600021368674,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bccaa8d-e874-466c-96e6-476f10eab5b5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c23be9fca0fc7f6d765c3552
106eab10700f65628eb7ccd437a3bcfa0d5b9a,PodSandboxId:f93fdf924f8309c3effe372036e1e64c865bf9c5159ee3dd30d342fbf027fad6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733783591387029260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9a6458-b9f3-47d5-af12-07b1a97dbcdd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db318df65ff7495e71d8db136b9900162fd87
e0838bec0ffc6ac018751dbd51,PodSandboxId:ab020f040f68b3cc195fc7eb1a9ffb524b923e471ceb2522b7054cfe0365ee67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733783588887036524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d7jm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8dad938-bb60-4879-907c-12003e131d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea,PodSandboxId:d28e9ffc762b0993c068c93892e6f5dfd007dea9dca19255d14d0e5d057c253d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733783585550402292,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x6vmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74e8d2a-5b4f-4e61-8783-167e45a70839,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb,PodSandboxId:ef109cc31f866b10c954e785db5445ee07e0101c1145ef2cc92cc2849ad037f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733783574087861658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94fbc14579012aee9e2dfe0c623f852b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c,PodSandboxId:2716276c108c2ec300f07d9d29435f94f1fc9402422ff2da602d63b66cb63f40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733783574095421255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805474f477faf5b0e135611efdccbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.
kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9,PodSandboxId:a1fdb7b4361a8cac11b6187d63408377a794fe44997e45cd6bc0d84e75a962e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733783574077026813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d09ee7fe7fafb23694a51345ea774ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72,PodSandboxId:460a571ff8a02351cfe1cb4af70e89ff62082982704050dc4975bc0f9a09af96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733783574086955131,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 931dcf38217fed4a6c2f86f5eefa5ddb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuberne
tes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64985780-a46b-4383-a888-1ff4ac074512 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.354940329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9731fa56-1395-4a66-9800-4966b79eb841 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.355023721Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9731fa56-1395-4a66-9800-4966b79eb841 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.356154056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8993b5d-600f-4b05-857f-fc92b92a9870 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.357315215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733783905357285053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8993b5d-600f-4b05-857f-fc92b92a9870 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.357873309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83a0740c-f118-4f02-8958-b7d4d4700f90 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.357991314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83a0740c-f118-4f02-8958-b7d4d4700f90 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:38:25 addons-495659 crio[661]: time="2024-12-09 22:38:25.358362321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c441db11bc82d36cb58c484d982f2b33d9746e3cfb59ab7e50f44d2a8f82beed,PodSandboxId:e637581f75904b9e88a63f5f1514109b9b225e11b150c77d97496a39d80a8e24,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733783769253960753,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e3e4108-ab04-4c22-9a48-b5b6431d743f,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceb3922f9f55ef7df0a0d5801bdd4a62c5a19c8dd9e2473dd0472aefdeebe31,PodSandboxId:87389b2b153f07c1dccbc9f998efb0b10c01a817c5e5e8d2d894091b5896fb2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733783728876642001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 305d5fb6-4c01-480c-9f96-855b1c53733a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28e66371077f653252136d166d1742ac8639d1b9a7e4dd02d12934085377950,PodSandboxId:d9e6d5198fba2a459eafb3cd723b8ad729a76f85f1ea8adc6834972889cf6869,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733783664318172070,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-2h4z2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 12f5f3ba-8cfc-4366-99d1-1894640e4cd9,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ebc384e5c36d9d368779e4ea68009fe8a86861e7d9e8c1eed0e6925e0dcee583,PodSandboxId:5fec4041f0950fa03e67dbc3250a54c105299eefdc5c881e2c632948cdd9ebf5,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733783645685428556,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9kssc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0e6c200f-ad27-4870-9685-9bb959b4f38e,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0189c62eed2880b26194e68fc06eaf9d1a5e5b8ca4807ae1dd43e603b62a24,PodSandboxId:edce69c7de9e630bb140531afad0562e63bf42d38a0fe636a1fe09dd1eb97223,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733783645011845809,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2jqqf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1b0b1c94-0839-4f65-a9a9-2cb0c5227fa2,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1253a32a552c9653ab385d2f4f005ae0b45d6e44a29b4644f4165d28518de200,PodSandboxId:92a3cc43270e6193b5f72e174a4acecb10ad1c904c6ace4acf749cc538497d46,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733783633362603291,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-srq65,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bc7e0933-639d-4e14-9285-644308291889,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199c999b39deffa7789b0eee1bc6ab3af1a3190deedcaf06522abe738cafc790,PodSandboxId:86a0860e2152fe0d9a304ce6c5625a736bdbcc772a5f8ab9c2d6b629f0063e2f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733783623675264185,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-drvs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 697234f5-8b91-4bd8-9d7a-681c7fd5c8b3,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac972c6ca8a2e0ccd932e682297bdc569dfc685d1ce8b88b78efc70298f3e4d,PodSandboxId:7529bf3e5c9afc3423fe4f4e149effa090097df337540b079d945a086719156f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attem
pt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733783602468855120,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-k9c92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae134a6-d82f-4b75-adef-ebd11156ef7e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e201850e9fdd2ca599d1b648e2ba6378116f69efb78d09272894dd82cd887a,PodSandboxId:6e8378f8bdc25510d3ee18bafc946fd1c54abf3639896f7c1cb03d940138c758,Metadata:&ContainerM
etadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733783600021368674,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bccaa8d-e874-466c-96e6-476f10eab5b5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c23be9fca0fc7f6d765c3552
106eab10700f65628eb7ccd437a3bcfa0d5b9a,PodSandboxId:f93fdf924f8309c3effe372036e1e64c865bf9c5159ee3dd30d342fbf027fad6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733783591387029260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9a6458-b9f3-47d5-af12-07b1a97dbcdd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db318df65ff7495e71d8db136b9900162fd87
e0838bec0ffc6ac018751dbd51,PodSandboxId:ab020f040f68b3cc195fc7eb1a9ffb524b923e471ceb2522b7054cfe0365ee67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733783588887036524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d7jm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8dad938-bb60-4879-907c-12003e131d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea,PodSandboxId:d28e9ffc762b0993c068c93892e6f5dfd007dea9dca19255d14d0e5d057c253d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733783585550402292,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x6vmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74e8d2a-5b4f-4e61-8783-167e45a70839,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb,PodSandboxId:ef109cc31f866b10c954e785db5445ee07e0101c1145ef2cc92cc2849ad037f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733783574087861658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94fbc14579012aee9e2dfe0c623f852b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c,PodSandboxId:2716276c108c2ec300f07d9d29435f94f1fc9402422ff2da602d63b66cb63f40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733783574095421255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805474f477faf5b0e135611efdccbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.
kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9,PodSandboxId:a1fdb7b4361a8cac11b6187d63408377a794fe44997e45cd6bc0d84e75a962e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733783574077026813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d09ee7fe7fafb23694a51345ea774ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72,PodSandboxId:460a571ff8a02351cfe1cb4af70e89ff62082982704050dc4975bc0f9a09af96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733783574086955131,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 931dcf38217fed4a6c2f86f5eefa5ddb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuberne
tes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83a0740c-f118-4f02-8958-b7d4d4700f90 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c441db11bc82d       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago       Running             nginx                     0                   e637581f75904       nginx
	fceb3922f9f55       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   87389b2b153f0       busybox
	e28e66371077f       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             4 minutes ago       Running             controller                0                   d9e6d5198fba2       ingress-nginx-controller-5f85ff4588-2h4z2
	ebc384e5c36d9       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             4 minutes ago       Exited              patch                     1                   5fec4041f0950       ingress-nginx-admission-patch-9kssc
	1a0189c62eed2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   edce69c7de9e6       ingress-nginx-admission-create-2jqqf
	1253a32a552c9       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   92a3cc43270e6       local-path-provisioner-86d989889c-srq65
	199c999b39def       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   86a0860e2152f       metrics-server-84c5f94fbc-drvs4
	9ac972c6ca8a2       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   7529bf3e5c9af       amd-gpu-device-plugin-k9c92
	d8e201850e9fd       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   6e8378f8bdc25       kube-ingress-dns-minikube
	e0c23be9fca0f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   f93fdf924f830       storage-provisioner
	0db318df65ff7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   ab020f040f68b       coredns-7c65d6cfc9-d7jm7
	03167612b8d46       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             5 minutes ago       Running             kube-proxy                0                   d28e9ffc762b0       kube-proxy-x6vmt
	4d807bef69ecb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   2716276c108c2       etcd-addons-495659
	69519752b978b       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago       Running             kube-scheduler            0                   ef109cc31f866       kube-scheduler-addons-495659
	0a7dd6f001e51       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago       Running             kube-apiserver            0                   460a571ff8a02       kube-apiserver-addons-495659
	3ce76bec56eb8       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago       Running             kube-controller-manager   0                   a1fdb7b4361a8       kube-controller-manager-addons-495659
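	(The table above is the human-readable form of `crictl ps -a`; the `|| sudo docker ps -a` fallback in the command logged earlier does not fire on this CRI-O node. For scripting, the same data can be pulled as JSON. A small sketch, assuming crictl's JSON output uses the CRI ListContainersResponse field names seen in the CRI-O debug log above:)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Minimal subset of crictl's JSON output; field names assumed to follow
	// the CRI ListContainersResponse shown in the CRI-O debug log above.
	type psOutput struct {
		Containers []struct {
			ID       string `json:"id"`
			State    string `json:"state"`
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
		} `json:"containers"`
	}

	func main() {
		raw, err := exec.Command("sudo", "crictl", "ps", "-a", "-o", "json").Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			return
		}
		var ps psOutput
		if err := json.Unmarshal(raw, &ps); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, c := range ps.Containers {
			// Prints truncated ID, container name, and state (e.g. CONTAINER_RUNNING).
			fmt.Printf("%-13.13s %-25s %s\n", c.ID, c.Metadata.Name, c.State)
		}
	}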
	
	
	==> coredns [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51] <==
	[INFO] 10.244.0.7:47990 - 58575 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000172037s
	[INFO] 10.244.0.7:47990 - 13098 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000101156s
	[INFO] 10.244.0.7:47990 - 7580 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00006743s
	[INFO] 10.244.0.7:47990 - 15260 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000115941s
	[INFO] 10.244.0.7:47990 - 30273 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000094487s
	[INFO] 10.244.0.7:47990 - 52660 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000078603s
	[INFO] 10.244.0.7:47990 - 2407 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000065386s
	[INFO] 10.244.0.7:40393 - 22610 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000114997s
	[INFO] 10.244.0.7:40393 - 22882 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097794s
	[INFO] 10.244.0.7:35334 - 5523 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052785s
	[INFO] 10.244.0.7:35334 - 5743 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038717s
	[INFO] 10.244.0.7:57980 - 26139 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000033915s
	[INFO] 10.244.0.7:57980 - 26377 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061745s
	[INFO] 10.244.0.7:55043 - 29056 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000035511s
	[INFO] 10.244.0.7:55043 - 28890 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00006763s
	[INFO] 10.244.0.23:55555 - 59965 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00033797s
	[INFO] 10.244.0.23:47152 - 32458 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000414177s
	[INFO] 10.244.0.23:53394 - 18242 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000175034s
	[INFO] 10.244.0.23:44839 - 32006 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096316s
	[INFO] 10.244.0.23:47602 - 35547 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096487s
	[INFO] 10.244.0.23:49701 - 13166 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000084639s
	[INFO] 10.244.0.23:42024 - 10084 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000663232s
	[INFO] 10.244.0.23:47020 - 5200 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001023179s
	[INFO] 10.244.0.27:43398 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000600625s
	[INFO] 10.244.0.27:37751 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000144784s
	
	
	==> describe nodes <==
	Name:               addons-495659
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-495659
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=addons-495659
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T22_32_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-495659
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:32:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-495659
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:38:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:36:32 +0000   Mon, 09 Dec 2024 22:32:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:36:32 +0000   Mon, 09 Dec 2024 22:32:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:36:32 +0000   Mon, 09 Dec 2024 22:32:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:36:32 +0000   Mon, 09 Dec 2024 22:33:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    addons-495659
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f7fc1d23c4447a6b647c74af79ff52c
	  System UUID:                6f7fc1d2-3c44-47a6-b647-c74af79ff52c
	  Boot ID:                    e0437aa1-375f-4d05-8d44-cfd4e70449ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     hello-world-app-55bf9c44b4-r8srv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-2h4z2    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m13s
	  kube-system                 amd-gpu-device-plugin-k9c92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 coredns-7c65d6cfc9-d7jm7                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m21s
	  kube-system                 etcd-addons-495659                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m26s
	  kube-system                 kube-apiserver-addons-495659                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-addons-495659        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-x6vmt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-scheduler-addons-495659                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 metrics-server-84c5f94fbc-drvs4              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m17s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  local-path-storage          local-path-provisioner-86d989889c-srq65      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m18s  kube-proxy       
	  Normal  Starting                 5m26s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m26s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m26s  kubelet          Node addons-495659 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s  kubelet          Node addons-495659 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s  kubelet          Node addons-495659 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m25s  kubelet          Node addons-495659 status is now: NodeReady
	  Normal  RegisteredNode           5m22s  node-controller  Node addons-495659 event: Registered Node addons-495659 in Controller
	
	
	==> dmesg <==
	[  +5.990700] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.088011] kauditd_printk_skb: 69 callbacks suppressed
	[Dec 9 22:33] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[  +0.167220] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.010973] kauditd_printk_skb: 116 callbacks suppressed
	[  +5.272101] kauditd_printk_skb: 131 callbacks suppressed
	[  +5.271276] kauditd_printk_skb: 71 callbacks suppressed
	[ +14.489459] kauditd_printk_skb: 15 callbacks suppressed
	[ +10.084986] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.309084] kauditd_printk_skb: 2 callbacks suppressed
	[Dec 9 22:34] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.064148] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.161304] kauditd_printk_skb: 42 callbacks suppressed
	[  +9.179112] kauditd_printk_skb: 22 callbacks suppressed
	[ +13.874383] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 9 22:35] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.115483] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.055871] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.326466] kauditd_printk_skb: 34 callbacks suppressed
	[Dec 9 22:36] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.089690] kauditd_printk_skb: 64 callbacks suppressed
	[ +12.876936] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.667611] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.459316] kauditd_printk_skb: 10 callbacks suppressed
	[Dec 9 22:38] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c] <==
	{"level":"info","ts":"2024-12-09T22:34:23.570162Z","caller":"traceutil/trace.go:171","msg":"trace[308595677] linearizableReadLoop","detail":"{readStateIndex:1132; appliedIndex:1131; }","duration":"303.724239ms","start":"2024-12-09T22:34:23.266422Z","end":"2024-12-09T22:34:23.570147Z","steps":["trace[308595677] 'read index received'  (duration: 303.634457ms)","trace[308595677] 'applied index is now lower than readState.Index'  (duration: 89.163µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T22:34:23.570305Z","caller":"traceutil/trace.go:171","msg":"trace[1535974128] transaction","detail":"{read_only:false; response_revision:1102; number_of_response:1; }","duration":"310.987393ms","start":"2024-12-09T22:34:23.259310Z","end":"2024-12-09T22:34:23.570297Z","steps":["trace[1535974128] 'process raft request'  (duration: 310.738516ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T22:34:23.570404Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.628697ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T22:34:23.570458Z","caller":"traceutil/trace.go:171","msg":"trace[1597284253] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"293.690451ms","start":"2024-12-09T22:34:23.276759Z","end":"2024-12-09T22:34:23.570449Z","steps":["trace[1597284253] 'agreement among raft nodes before linearized reading'  (duration: 293.594454ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T22:34:23.570424Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T22:34:23.259295Z","time spent":"311.051041ms","remote":"127.0.0.1:34386","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":482,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1090 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:419 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-12-09T22:34:23.570657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.244691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-drvs4\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2024-12-09T22:34:23.570689Z","caller":"traceutil/trace.go:171","msg":"trace[179489460] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-drvs4; range_end:; response_count:1; response_revision:1102; }","duration":"304.278862ms","start":"2024-12-09T22:34:23.266405Z","end":"2024-12-09T22:34:23.570684Z","steps":["trace[179489460] 'agreement among raft nodes before linearized reading'  (duration: 304.167495ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T22:34:23.570708Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T22:34:23.266350Z","time spent":"304.35291ms","remote":"127.0.0.1:34300","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4589,"request content":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-drvs4\" "}
	{"level":"warn","ts":"2024-12-09T22:34:23.570801Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.524643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-12-09T22:34:23.570831Z","caller":"traceutil/trace.go:171","msg":"trace[1380735404] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1102; }","duration":"186.556079ms","start":"2024-12-09T22:34:23.384269Z","end":"2024-12-09T22:34:23.570825Z","steps":["trace[1380735404] 'agreement among raft nodes before linearized reading'  (duration: 186.513114ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T22:34:55.917575Z","caller":"traceutil/trace.go:171","msg":"trace[1843404297] transaction","detail":"{read_only:false; response_revision:1203; number_of_response:1; }","duration":"124.587896ms","start":"2024-12-09T22:34:55.792973Z","end":"2024-12-09T22:34:55.917561Z","steps":["trace[1843404297] 'process raft request'  (duration: 124.280417ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T22:36:09.150831Z","caller":"traceutil/trace.go:171","msg":"trace[1874001647] linearizableReadLoop","detail":"{readStateIndex:1634; appliedIndex:1633; }","duration":"102.512757ms","start":"2024-12-09T22:36:09.048305Z","end":"2024-12-09T22:36:09.150818Z","steps":["trace[1874001647] 'read index received'  (duration: 102.385844ms)","trace[1874001647] 'applied index is now lower than readState.Index'  (duration: 126.23µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T22:36:09.151186Z","caller":"traceutil/trace.go:171","msg":"trace[1647848914] transaction","detail":"{read_only:false; response_revision:1572; number_of_response:1; }","duration":"356.141011ms","start":"2024-12-09T22:36:08.795035Z","end":"2024-12-09T22:36:09.151176Z","steps":["trace[1647848914] 'process raft request'  (duration: 355.699702ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T22:36:09.152090Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T22:36:08.795022Z","time spent":"357.01071ms","remote":"127.0.0.1:34386","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1532 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-12-09T22:36:09.151284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.970403ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T22:36:09.152563Z","caller":"traceutil/trace.go:171","msg":"trace[2107284344] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1572; }","duration":"104.255936ms","start":"2024-12-09T22:36:09.048294Z","end":"2024-12-09T22:36:09.152550Z","steps":["trace[2107284344] 'agreement among raft nodes before linearized reading'  (duration: 102.958741ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T22:36:24.244318Z","caller":"traceutil/trace.go:171","msg":"trace[1180422457] linearizableReadLoop","detail":"{readStateIndex:1703; appliedIndex:1702; }","duration":"197.090968ms","start":"2024-12-09T22:36:24.047214Z","end":"2024-12-09T22:36:24.244304Z","steps":["trace[1180422457] 'read index received'  (duration: 196.933384ms)","trace[1180422457] 'applied index is now lower than readState.Index'  (duration: 156.98µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T22:36:24.244397Z","caller":"traceutil/trace.go:171","msg":"trace[239729648] transaction","detail":"{read_only:false; response_revision:1638; number_of_response:1; }","duration":"210.441458ms","start":"2024-12-09T22:36:24.033942Z","end":"2024-12-09T22:36:24.244383Z","steps":["trace[239729648] 'process raft request'  (duration: 210.226298ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T22:36:24.244431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.204147ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T22:36:24.244451Z","caller":"traceutil/trace.go:171","msg":"trace[317081139] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1638; }","duration":"197.23819ms","start":"2024-12-09T22:36:24.047207Z","end":"2024-12-09T22:36:24.244446Z","steps":["trace[317081139] 'agreement among raft nodes before linearized reading'  (duration: 197.170923ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T22:36:50.834752Z","caller":"traceutil/trace.go:171","msg":"trace[442177980] linearizableReadLoop","detail":"{readStateIndex:1915; appliedIndex:1914; }","duration":"201.753591ms","start":"2024-12-09T22:36:50.632985Z","end":"2024-12-09T22:36:50.834738Z","steps":["trace[442177980] 'read index received'  (duration: 201.627655ms)","trace[442177980] 'applied index is now lower than readState.Index'  (duration: 125.515µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T22:36:50.834860Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.874237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-resizer-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T22:36:50.834879Z","caller":"traceutil/trace.go:171","msg":"trace[506953096] range","detail":"{range_begin:/registry/clusterroles/external-resizer-runner; range_end:; response_count:0; response_revision:1836; }","duration":"201.911043ms","start":"2024-12-09T22:36:50.632963Z","end":"2024-12-09T22:36:50.834874Z","steps":["trace[506953096] 'agreement among raft nodes before linearized reading'  (duration: 201.832636ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T22:36:50.835029Z","caller":"traceutil/trace.go:171","msg":"trace[253249298] transaction","detail":"{read_only:false; response_revision:1836; number_of_response:1; }","duration":"366.879696ms","start":"2024-12-09T22:36:50.468137Z","end":"2024-12-09T22:36:50.835017Z","steps":["trace[253249298] 'process raft request'  (duration: 366.516212ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T22:36:50.835111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T22:36:50.468122Z","time spent":"366.939864ms","remote":"127.0.0.1:34280","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1828 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 22:38:25 up 6 min,  0 users,  load average: 0.21, 0.75, 0.44
	Linux addons-495659 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72] <==
	 > logger="UnhandledError"
	E1209 22:34:49.332523       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.45.9:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.45.9:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.45.9:443: connect: connection refused" logger="UnhandledError"
	E1209 22:34:49.334515       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.45.9:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.45.9:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.45.9:443: connect: connection refused" logger="UnhandledError"
	E1209 22:34:49.339449       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.45.9:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.45.9:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.45.9:443: connect: connection refused" logger="UnhandledError"
	I1209 22:34:49.408933       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1209 22:35:35.444988       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:45530: use of closed network connection
	E1209 22:35:35.623097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:45564: use of closed network connection
	I1209 22:35:44.708487       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.31.146"}
	I1209 22:36:04.882928       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 22:36:05.066047       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.176.249"}
	I1209 22:36:09.719969       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1209 22:36:10.750017       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1209 22:36:31.953190       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1209 22:36:46.514310       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 22:36:46.514369       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 22:36:46.556592       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 22:36:46.556639       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 22:36:46.575888       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 22:36:46.576476       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 22:36:46.604405       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 22:36:46.604496       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1209 22:36:47.557777       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1209 22:36:47.605190       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1209 22:36:47.702157       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1209 22:38:24.207303       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.254.205"}
	
	
	==> kube-controller-manager [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9] <==
	I1209 22:37:03.673993       1 shared_informer.go:320] Caches are synced for resource quota
	I1209 22:37:04.106741       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1209 22:37:04.106855       1 shared_informer.go:320] Caches are synced for garbage collector
	W1209 22:37:07.255166       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:37:07.255235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:37:17.539598       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:37:17.539652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:37:23.674977       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:37:23.675117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:37:25.204111       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:37:25.204159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:37:25.633620       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:37:25.633736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:37:49.486766       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:37:49.486943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:37:54.935869       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:37:54.935942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:38:11.173123       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:38:11.173178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:38:16.462940       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:38:16.463078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1209 22:38:24.027346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="37.052667ms"
	I1209 22:38:24.036563       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.065225ms"
	I1209 22:38:24.037040       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="51.812µs"
	I1209 22:38:24.047179       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="43.582µs"
	
	
	==> kube-proxy [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 22:33:06.610078       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 22:33:06.649856       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	E1209 22:33:06.650053       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 22:33:06.756286       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 22:33:06.756316       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 22:33:06.756342       1 server_linux.go:169] "Using iptables Proxier"
	I1209 22:33:06.769170       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 22:33:06.769425       1 server.go:483] "Version info" version="v1.31.2"
	I1209 22:33:06.769473       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 22:33:06.775071       1 config.go:199] "Starting service config controller"
	I1209 22:33:06.775086       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 22:33:06.775116       1 config.go:105] "Starting endpoint slice config controller"
	I1209 22:33:06.775120       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 22:33:06.775556       1 config.go:328] "Starting node config controller"
	I1209 22:33:06.775564       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 22:33:06.888291       1 shared_informer.go:320] Caches are synced for node config
	I1209 22:33:06.888335       1 shared_informer.go:320] Caches are synced for service config
	I1209 22:33:06.888393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb] <==
	W1209 22:32:57.446670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 22:32:57.446776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.480981       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 22:32:57.481086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.533169       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 22:32:57.533324       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 22:32:57.603548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1209 22:32:57.603814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.628473       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 22:32:57.628580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.658486       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 22:32:57.658642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.750668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 22:32:57.750800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.801744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 22:32:57.802054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.815374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1209 22:32:57.815465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.826887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 22:32:57.826986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.908281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 22:32:57.908334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.934772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 22:32:57.934968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 22:32:59.710371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 22:38:19 addons-495659 kubelet[1207]: E1209 22:38:19.482188    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733783899481800875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:38:19 addons-495659 kubelet[1207]: E1209 22:38:19.482224    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733783899481800875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: E1209 22:38:24.017332    1207 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d710134-71c5-4a26-86cd-f58e421e155c" containerName="csi-snapshotter"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: E1209 22:38:24.017704    1207 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ef3c979-b020-4950-835f-4960308d5a38" containerName="volume-snapshot-controller"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: E1209 22:38:24.017788    1207 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d710134-71c5-4a26-86cd-f58e421e155c" containerName="csi-external-health-monitor-controller"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: E1209 22:38:24.017822    1207 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d710134-71c5-4a26-86cd-f58e421e155c" containerName="node-driver-registrar"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: E1209 22:38:24.017882    1207 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d710134-71c5-4a26-86cd-f58e421e155c" containerName="csi-provisioner"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: E1209 22:38:24.017970    1207 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9df0b766-98a8-45e9-a41a-b2d57a6f0b69" containerName="csi-attacher"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: E1209 22:38:24.018009    1207 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d710134-71c5-4a26-86cd-f58e421e155c" containerName="hostpath"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: E1209 22:38:24.018074    1207 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b9c7557-95a8-4767-8ae9-5765b9249de1" containerName="csi-resizer"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: E1209 22:38:24.018106    1207 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="96a1edd3-1afc-4328-804d-8e1a4b5c0655" containerName="volume-snapshot-controller"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: E1209 22:38:24.018177    1207 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b451504a-35d1-4bd8-bdda-759aeb5a6b39" containerName="task-pv-container"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: E1209 22:38:24.018210    1207 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d710134-71c5-4a26-86cd-f58e421e155c" containerName="liveness-probe"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: I1209 22:38:24.018317    1207 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d710134-71c5-4a26-86cd-f58e421e155c" containerName="hostpath"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: I1209 22:38:24.018387    1207 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df0b766-98a8-45e9-a41a-b2d57a6f0b69" containerName="csi-attacher"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: I1209 22:38:24.018418    1207 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef3c979-b020-4950-835f-4960308d5a38" containerName="volume-snapshot-controller"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: I1209 22:38:24.018480    1207 memory_manager.go:354] "RemoveStaleState removing state" podUID="96a1edd3-1afc-4328-804d-8e1a4b5c0655" containerName="volume-snapshot-controller"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: I1209 22:38:24.018510    1207 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b9c7557-95a8-4767-8ae9-5765b9249de1" containerName="csi-resizer"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: I1209 22:38:24.018577    1207 memory_manager.go:354] "RemoveStaleState removing state" podUID="b451504a-35d1-4bd8-bdda-759aeb5a6b39" containerName="task-pv-container"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: I1209 22:38:24.018607    1207 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d710134-71c5-4a26-86cd-f58e421e155c" containerName="csi-external-health-monitor-controller"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: I1209 22:38:24.018671    1207 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d710134-71c5-4a26-86cd-f58e421e155c" containerName="node-driver-registrar"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: I1209 22:38:24.018701    1207 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d710134-71c5-4a26-86cd-f58e421e155c" containerName="liveness-probe"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: I1209 22:38:24.018765    1207 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d710134-71c5-4a26-86cd-f58e421e155c" containerName="csi-provisioner"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: I1209 22:38:24.018795    1207 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d710134-71c5-4a26-86cd-f58e421e155c" containerName="csi-snapshotter"
	Dec 09 22:38:24 addons-495659 kubelet[1207]: I1209 22:38:24.139510    1207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcm65\" (UniqueName: \"kubernetes.io/projected/296e2cce-48cc-4570-a9b7-bdd8f1dcc383-kube-api-access-rcm65\") pod \"hello-world-app-55bf9c44b4-r8srv\" (UID: \"296e2cce-48cc-4570-a9b7-bdd8f1dcc383\") " pod="default/hello-world-app-55bf9c44b4-r8srv"
	
	
	==> storage-provisioner [e0c23be9fca0fc7f6d765c3552106eab10700f65628eb7ccd437a3bcfa0d5b9a] <==
	I1209 22:33:12.333738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 22:33:12.371657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 22:33:12.371702       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 22:33:12.442821       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 22:33:12.458069       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2b4e2601-9884-4453-b8b1-6d90190db87b", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-495659_d64f94cb-e5cc-490e-bda7-7b4bf7f40e77 became leader
	I1209 22:33:12.472458       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-495659_d64f94cb-e5cc-490e-bda7-7b4bf7f40e77!
	I1209 22:33:12.572714       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-495659_d64f94cb-e5cc-490e-bda7-7b4bf7f40e77!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-495659 -n addons-495659
helpers_test.go:261: (dbg) Run:  kubectl --context addons-495659 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-r8srv ingress-nginx-admission-create-2jqqf ingress-nginx-admission-patch-9kssc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-495659 describe pod hello-world-app-55bf9c44b4-r8srv ingress-nginx-admission-create-2jqqf ingress-nginx-admission-patch-9kssc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-495659 describe pod hello-world-app-55bf9c44b4-r8srv ingress-nginx-admission-create-2jqqf ingress-nginx-admission-patch-9kssc: exit status 1 (73.700031ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-r8srv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-495659/192.168.39.123
	Start Time:       Mon, 09 Dec 2024 22:38:24 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rcm65 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rcm65:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-r8srv to addons-495659
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2jqqf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9kssc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-495659 describe pod hello-world-app-55bf9c44b4-r8srv ingress-nginx-admission-create-2jqqf ingress-nginx-admission-patch-9kssc: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-495659 addons disable ingress-dns --alsologtostderr -v=1: (1.716033743s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-495659 addons disable ingress --alsologtostderr -v=1: (7.69384638s)
--- FAIL: TestAddons/parallel/Ingress (151.34s)

                                                
                                    
TestAddons/parallel/MetricsServer (351.67s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.662216ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-drvs4" [697234f5-8b91-4bd8-9d7a-681c7fd5c8b3] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.061425581s
addons_test.go:402: (dbg) Run:  kubectl --context addons-495659 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-495659 top pods -n kube-system: exit status 1 (109.738044ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-k9c92, age: 3m0.304479894s

                                                
                                                
** /stderr **
I1209 22:36:06.306576   26253 retry.go:31] will retry after 1.913107185s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-495659 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-495659 top pods -n kube-system: exit status 1 (65.85202ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-k9c92, age: 3m2.284007567s

                                                
                                                
** /stderr **
I1209 22:36:08.286133   26253 retry.go:31] will retry after 4.506415155s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-495659 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-495659 top pods -n kube-system: exit status 1 (66.679507ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-k9c92, age: 3m6.858189719s

                                                
                                                
** /stderr **
I1209 22:36:12.859854   26253 retry.go:31] will retry after 7.776468867s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-495659 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-495659 top pods -n kube-system: exit status 1 (67.099616ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-k9c92, age: 3m14.702653605s

                                                
                                                
** /stderr **
I1209 22:36:20.704437   26253 retry.go:31] will retry after 15.00198582s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-495659 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-495659 top pods -n kube-system: exit status 1 (63.031088ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-k9c92, age: 3m29.768440535s

                                                
                                                
** /stderr **
I1209 22:36:35.770130   26253 retry.go:31] will retry after 16.781723488s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-495659 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-495659 top pods -n kube-system: exit status 1 (61.614201ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-k9c92, age: 3m46.616510004s

                                                
                                                
** /stderr **
I1209 22:36:52.618030   26253 retry.go:31] will retry after 16.203201653s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-495659 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-495659 top pods -n kube-system: exit status 1 (66.108851ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-k9c92, age: 4m2.888109341s

                                                
                                                
** /stderr **
I1209 22:37:08.889835   26253 retry.go:31] will retry after 38.564639121s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-495659 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-495659 top pods -n kube-system: exit status 1 (61.548135ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-k9c92, age: 4m41.515388839s

** /stderr **
I1209 22:37:47.517366   26253 retry.go:31] will retry after 51.754914038s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-495659 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-495659 top pods -n kube-system: exit status 1 (62.293123ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-k9c92, age: 5m33.334913297s

** /stderr **
I1209 22:38:39.336835   26253 retry.go:31] will retry after 37.640377001s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-495659 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-495659 top pods -n kube-system: exit status 1 (61.012993ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-k9c92, age: 6m11.036995163s

** /stderr **
I1209 22:39:17.038861   26253 retry.go:31] will retry after 46.043535763s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-495659 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-495659 top pods -n kube-system: exit status 1 (64.65917ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-k9c92, age: 6m57.145594378s

** /stderr **
I1209 22:40:03.147474   26253 retry.go:31] will retry after 1m10.705875832s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-495659 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-495659 top pods -n kube-system: exit status 1 (61.256114ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-k9c92, age: 8m7.919389489s

** /stderr **
I1209 22:41:13.921515   26253 retry.go:31] will retry after 36.331236125s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-495659 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-495659 top pods -n kube-system: exit status 1 (61.94422ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-k9c92, age: 8m44.316275779s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
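
Note: every retry above fails the same way: metrics-server never reported metrics for the amd-gpu-device-plugin pod before the test gave up. As a rough manual check against the same cluster (not part of the test harness; the k8s-app=metrics-server label selector below is an assumption about how the addon labels its pods), one could run:

	kubectl --context addons-495659 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-495659 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-495659 top nodes

If the v1beta1.metrics.k8s.io APIService is not Available or the metrics-server pod is not Ready, kubectl top will keep failing exactly as logged above.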
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-495659 -n addons-495659
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-495659 logs -n 25: (1.155359799s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-091652                                                                     | download-only-091652 | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC | 09 Dec 24 22:32 UTC |
	| delete  | -p download-only-578923                                                                     | download-only-578923 | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC | 09 Dec 24 22:32 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-501847 | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC |                     |
	|         | binary-mirror-501847                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39247                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-501847                                                                     | binary-mirror-501847 | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC | 09 Dec 24 22:32 UTC |
	| addons  | enable dashboard -p                                                                         | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC |                     |
	|         | addons-495659                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC |                     |
	|         | addons-495659                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-495659 --wait=true                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC | 09 Dec 24 22:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:35 UTC | 09 Dec 24 22:35 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:35 UTC | 09 Dec 24 22:35 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:35 UTC | 09 Dec 24 22:35 UTC |
	|         | -p addons-495659                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:35 UTC | 09 Dec 24 22:35 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-495659 addons                                                                        | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:35 UTC | 09 Dec 24 22:35 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:35 UTC | 09 Dec 24 22:36 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-495659 ip                                                                            | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-495659 addons                                                                        | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-495659 ssh cat                                                                       | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | /opt/local-path-provisioner/pvc-d50f59cb-64dd-4a2e-b94c-429fc96e21da_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-495659 addons                                                                        | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-495659 ssh curl -s                                                                   | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-495659 addons                                                                        | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-495659 addons                                                                        | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:36 UTC | 09 Dec 24 22:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-495659 ip                                                                            | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:38 UTC | 09 Dec 24 22:38 UTC |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:38 UTC | 09 Dec 24 22:38 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-495659 addons disable                                                                | addons-495659        | jenkins | v1.34.0 | 09 Dec 24 22:38 UTC | 09 Dec 24 22:38 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 22:32:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 22:32:17.751936   26899 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:32:17.752493   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:32:17.752512   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:32:17.752519   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:32:17.752941   26899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 22:32:17.753776   26899 out.go:352] Setting JSON to false
	I1209 22:32:17.754593   26899 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4489,"bootTime":1733779049,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 22:32:17.754691   26899 start.go:139] virtualization: kvm guest
	I1209 22:32:17.756536   26899 out.go:177] * [addons-495659] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 22:32:17.758116   26899 notify.go:220] Checking for updates...
	I1209 22:32:17.758129   26899 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 22:32:17.759348   26899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:32:17.760632   26899 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:32:17.761843   26899 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:32:17.763110   26899 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 22:32:17.764437   26899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 22:32:17.765761   26899 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:32:17.797653   26899 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 22:32:17.798937   26899 start.go:297] selected driver: kvm2
	I1209 22:32:17.798949   26899 start.go:901] validating driver "kvm2" against <nil>
	I1209 22:32:17.798960   26899 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 22:32:17.799752   26899 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:32:17.799867   26899 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 22:32:17.814913   26899 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 22:32:17.814964   26899 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 22:32:17.815244   26899 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:32:17.815276   26899 cni.go:84] Creating CNI manager for ""
	I1209 22:32:17.815336   26899 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 22:32:17.815348   26899 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 22:32:17.815418   26899 start.go:340] cluster config:
	{Name:addons-495659 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-495659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:32:17.815986   26899 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:32:17.817830   26899 out.go:177] * Starting "addons-495659" primary control-plane node in "addons-495659" cluster
	I1209 22:32:17.819404   26899 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:32:17.819437   26899 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 22:32:17.819447   26899 cache.go:56] Caching tarball of preloaded images
	I1209 22:32:17.819509   26899 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:32:17.819526   26899 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:32:17.819824   26899 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/config.json ...
	I1209 22:32:17.819847   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/config.json: {Name:mk956352758e5b2bd9e07f8704d8de74b0230bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:17.819991   26899 start.go:360] acquireMachinesLock for addons-495659: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:32:17.820036   26899 start.go:364] duration metric: took 31.824µs to acquireMachinesLock for "addons-495659"
	I1209 22:32:17.820053   26899 start.go:93] Provisioning new machine with config: &{Name:addons-495659 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-495659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:32:17.820112   26899 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 22:32:17.822592   26899 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1209 22:32:17.822749   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:32:17.822781   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:32:17.837004   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38047
	I1209 22:32:17.837441   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:32:17.838048   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:32:17.838064   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:32:17.838468   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:32:17.838654   26899 main.go:141] libmachine: (addons-495659) Calling .GetMachineName
	I1209 22:32:17.838802   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:17.838938   26899 start.go:159] libmachine.API.Create for "addons-495659" (driver="kvm2")
	I1209 22:32:17.838961   26899 client.go:168] LocalClient.Create starting
	I1209 22:32:17.839002   26899 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:32:17.966324   26899 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:32:18.100691   26899 main.go:141] libmachine: Running pre-create checks...
	I1209 22:32:18.100722   26899 main.go:141] libmachine: (addons-495659) Calling .PreCreateCheck
	I1209 22:32:18.101180   26899 main.go:141] libmachine: (addons-495659) Calling .GetConfigRaw
	I1209 22:32:18.101590   26899 main.go:141] libmachine: Creating machine...
	I1209 22:32:18.101605   26899 main.go:141] libmachine: (addons-495659) Calling .Create
	I1209 22:32:18.101776   26899 main.go:141] libmachine: (addons-495659) Creating KVM machine...
	I1209 22:32:18.103072   26899 main.go:141] libmachine: (addons-495659) DBG | found existing default KVM network
	I1209 22:32:18.103817   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:18.103669   26921 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I1209 22:32:18.103837   26899 main.go:141] libmachine: (addons-495659) DBG | created network xml: 
	I1209 22:32:18.103846   26899 main.go:141] libmachine: (addons-495659) DBG | <network>
	I1209 22:32:18.103851   26899 main.go:141] libmachine: (addons-495659) DBG |   <name>mk-addons-495659</name>
	I1209 22:32:18.103859   26899 main.go:141] libmachine: (addons-495659) DBG |   <dns enable='no'/>
	I1209 22:32:18.103871   26899 main.go:141] libmachine: (addons-495659) DBG |   
	I1209 22:32:18.103883   26899 main.go:141] libmachine: (addons-495659) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 22:32:18.103898   26899 main.go:141] libmachine: (addons-495659) DBG |     <dhcp>
	I1209 22:32:18.103906   26899 main.go:141] libmachine: (addons-495659) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 22:32:18.103915   26899 main.go:141] libmachine: (addons-495659) DBG |     </dhcp>
	I1209 22:32:18.103920   26899 main.go:141] libmachine: (addons-495659) DBG |   </ip>
	I1209 22:32:18.103927   26899 main.go:141] libmachine: (addons-495659) DBG |   
	I1209 22:32:18.103933   26899 main.go:141] libmachine: (addons-495659) DBG | </network>
	I1209 22:32:18.103940   26899 main.go:141] libmachine: (addons-495659) DBG | 
	I1209 22:32:18.110236   26899 main.go:141] libmachine: (addons-495659) DBG | trying to create private KVM network mk-addons-495659 192.168.39.0/24...
	I1209 22:32:18.172448   26899 main.go:141] libmachine: (addons-495659) DBG | private KVM network mk-addons-495659 192.168.39.0/24 created
	I1209 22:32:18.172478   26899 main.go:141] libmachine: (addons-495659) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659 ...
	I1209 22:32:18.172534   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:18.172450   26921 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:32:18.172573   26899 main.go:141] libmachine: (addons-495659) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:32:18.172597   26899 main.go:141] libmachine: (addons-495659) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:32:18.422402   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:18.422252   26921 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa...
	I1209 22:32:18.636573   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:18.636427   26921 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/addons-495659.rawdisk...
	I1209 22:32:18.636606   26899 main.go:141] libmachine: (addons-495659) DBG | Writing magic tar header
	I1209 22:32:18.636617   26899 main.go:141] libmachine: (addons-495659) DBG | Writing SSH key tar header
	I1209 22:32:18.636626   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:18.636538   26921 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659 ...
	I1209 22:32:18.636644   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659
	I1209 22:32:18.636686   26899 main.go:141] libmachine: (addons-495659) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659 (perms=drwx------)
	I1209 22:32:18.636699   26899 main.go:141] libmachine: (addons-495659) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:32:18.636710   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:32:18.636724   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:32:18.636731   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:32:18.636740   26899 main.go:141] libmachine: (addons-495659) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:32:18.636752   26899 main.go:141] libmachine: (addons-495659) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:32:18.636758   26899 main.go:141] libmachine: (addons-495659) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:32:18.636770   26899 main.go:141] libmachine: (addons-495659) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:32:18.636782   26899 main.go:141] libmachine: (addons-495659) Creating domain...
	I1209 22:32:18.636791   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:32:18.636801   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:32:18.636809   26899 main.go:141] libmachine: (addons-495659) DBG | Checking permissions on dir: /home
	I1209 22:32:18.636820   26899 main.go:141] libmachine: (addons-495659) DBG | Skipping /home - not owner
	I1209 22:32:18.637858   26899 main.go:141] libmachine: (addons-495659) define libvirt domain using xml: 
	I1209 22:32:18.637882   26899 main.go:141] libmachine: (addons-495659) <domain type='kvm'>
	I1209 22:32:18.637892   26899 main.go:141] libmachine: (addons-495659)   <name>addons-495659</name>
	I1209 22:32:18.637900   26899 main.go:141] libmachine: (addons-495659)   <memory unit='MiB'>4000</memory>
	I1209 22:32:18.637909   26899 main.go:141] libmachine: (addons-495659)   <vcpu>2</vcpu>
	I1209 22:32:18.637915   26899 main.go:141] libmachine: (addons-495659)   <features>
	I1209 22:32:18.637927   26899 main.go:141] libmachine: (addons-495659)     <acpi/>
	I1209 22:32:18.637936   26899 main.go:141] libmachine: (addons-495659)     <apic/>
	I1209 22:32:18.637945   26899 main.go:141] libmachine: (addons-495659)     <pae/>
	I1209 22:32:18.637956   26899 main.go:141] libmachine: (addons-495659)     
	I1209 22:32:18.637968   26899 main.go:141] libmachine: (addons-495659)   </features>
	I1209 22:32:18.637976   26899 main.go:141] libmachine: (addons-495659)   <cpu mode='host-passthrough'>
	I1209 22:32:18.637984   26899 main.go:141] libmachine: (addons-495659)   
	I1209 22:32:18.637991   26899 main.go:141] libmachine: (addons-495659)   </cpu>
	I1209 22:32:18.638003   26899 main.go:141] libmachine: (addons-495659)   <os>
	I1209 22:32:18.638011   26899 main.go:141] libmachine: (addons-495659)     <type>hvm</type>
	I1209 22:32:18.638019   26899 main.go:141] libmachine: (addons-495659)     <boot dev='cdrom'/>
	I1209 22:32:18.638033   26899 main.go:141] libmachine: (addons-495659)     <boot dev='hd'/>
	I1209 22:32:18.638047   26899 main.go:141] libmachine: (addons-495659)     <bootmenu enable='no'/>
	I1209 22:32:18.638057   26899 main.go:141] libmachine: (addons-495659)   </os>
	I1209 22:32:18.638067   26899 main.go:141] libmachine: (addons-495659)   <devices>
	I1209 22:32:18.638078   26899 main.go:141] libmachine: (addons-495659)     <disk type='file' device='cdrom'>
	I1209 22:32:18.638095   26899 main.go:141] libmachine: (addons-495659)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/boot2docker.iso'/>
	I1209 22:32:18.638110   26899 main.go:141] libmachine: (addons-495659)       <target dev='hdc' bus='scsi'/>
	I1209 22:32:18.638122   26899 main.go:141] libmachine: (addons-495659)       <readonly/>
	I1209 22:32:18.638132   26899 main.go:141] libmachine: (addons-495659)     </disk>
	I1209 22:32:18.638143   26899 main.go:141] libmachine: (addons-495659)     <disk type='file' device='disk'>
	I1209 22:32:18.638155   26899 main.go:141] libmachine: (addons-495659)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:32:18.638171   26899 main.go:141] libmachine: (addons-495659)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/addons-495659.rawdisk'/>
	I1209 22:32:18.638186   26899 main.go:141] libmachine: (addons-495659)       <target dev='hda' bus='virtio'/>
	I1209 22:32:18.638198   26899 main.go:141] libmachine: (addons-495659)     </disk>
	I1209 22:32:18.638208   26899 main.go:141] libmachine: (addons-495659)     <interface type='network'>
	I1209 22:32:18.638220   26899 main.go:141] libmachine: (addons-495659)       <source network='mk-addons-495659'/>
	I1209 22:32:18.638230   26899 main.go:141] libmachine: (addons-495659)       <model type='virtio'/>
	I1209 22:32:18.638242   26899 main.go:141] libmachine: (addons-495659)     </interface>
	I1209 22:32:18.638257   26899 main.go:141] libmachine: (addons-495659)     <interface type='network'>
	I1209 22:32:18.638269   26899 main.go:141] libmachine: (addons-495659)       <source network='default'/>
	I1209 22:32:18.638279   26899 main.go:141] libmachine: (addons-495659)       <model type='virtio'/>
	I1209 22:32:18.638290   26899 main.go:141] libmachine: (addons-495659)     </interface>
	I1209 22:32:18.638300   26899 main.go:141] libmachine: (addons-495659)     <serial type='pty'>
	I1209 22:32:18.638312   26899 main.go:141] libmachine: (addons-495659)       <target port='0'/>
	I1209 22:32:18.638322   26899 main.go:141] libmachine: (addons-495659)     </serial>
	I1209 22:32:18.638344   26899 main.go:141] libmachine: (addons-495659)     <console type='pty'>
	I1209 22:32:18.638359   26899 main.go:141] libmachine: (addons-495659)       <target type='serial' port='0'/>
	I1209 22:32:18.638366   26899 main.go:141] libmachine: (addons-495659)     </console>
	I1209 22:32:18.638370   26899 main.go:141] libmachine: (addons-495659)     <rng model='virtio'>
	I1209 22:32:18.638380   26899 main.go:141] libmachine: (addons-495659)       <backend model='random'>/dev/random</backend>
	I1209 22:32:18.638386   26899 main.go:141] libmachine: (addons-495659)     </rng>
	I1209 22:32:18.638391   26899 main.go:141] libmachine: (addons-495659)     
	I1209 22:32:18.638398   26899 main.go:141] libmachine: (addons-495659)     
	I1209 22:32:18.638403   26899 main.go:141] libmachine: (addons-495659)   </devices>
	I1209 22:32:18.638409   26899 main.go:141] libmachine: (addons-495659) </domain>
	I1209 22:32:18.638416   26899 main.go:141] libmachine: (addons-495659) 
	I1209 22:32:18.645091   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:03:36:99 in network default
	I1209 22:32:18.645626   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:18.645643   26899 main.go:141] libmachine: (addons-495659) Ensuring networks are active...
	I1209 22:32:18.646336   26899 main.go:141] libmachine: (addons-495659) Ensuring network default is active
	I1209 22:32:18.646660   26899 main.go:141] libmachine: (addons-495659) Ensuring network mk-addons-495659 is active
	I1209 22:32:18.647138   26899 main.go:141] libmachine: (addons-495659) Getting domain xml...
	I1209 22:32:18.647786   26899 main.go:141] libmachine: (addons-495659) Creating domain...
	I1209 22:32:20.055916   26899 main.go:141] libmachine: (addons-495659) Waiting to get IP...
	I1209 22:32:20.056613   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:20.056997   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:20.057036   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:20.056986   26921 retry.go:31] will retry after 218.738592ms: waiting for machine to come up
	I1209 22:32:20.277370   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:20.277796   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:20.277826   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:20.277737   26921 retry.go:31] will retry after 267.521853ms: waiting for machine to come up
	I1209 22:32:20.547141   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:20.547641   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:20.547662   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:20.547600   26921 retry.go:31] will retry after 327.553235ms: waiting for machine to come up
	I1209 22:32:20.876946   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:20.877395   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:20.877434   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:20.877339   26921 retry.go:31] will retry after 499.585414ms: waiting for machine to come up
	I1209 22:32:21.379044   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:21.379460   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:21.379502   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:21.379429   26921 retry.go:31] will retry after 626.096312ms: waiting for machine to come up
	I1209 22:32:22.007279   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:22.007690   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:22.007714   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:22.007659   26921 retry.go:31] will retry after 750.630685ms: waiting for machine to come up
	I1209 22:32:22.759423   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:22.759783   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:22.759816   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:22.759767   26921 retry.go:31] will retry after 1.046969717s: waiting for machine to come up
	I1209 22:32:23.808231   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:23.808619   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:23.808650   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:23.808567   26921 retry.go:31] will retry after 1.386247951s: waiting for machine to come up
	I1209 22:32:25.196568   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:25.196910   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:25.196943   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:25.196852   26921 retry.go:31] will retry after 1.740538424s: waiting for machine to come up
	I1209 22:32:26.939741   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:26.940162   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:26.940194   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:26.940114   26921 retry.go:31] will retry after 1.546303558s: waiting for machine to come up
	I1209 22:32:28.487709   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:28.488106   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:28.488123   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:28.488079   26921 retry.go:31] will retry after 1.978335172s: waiting for machine to come up
	I1209 22:32:30.468778   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:30.469252   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:30.469275   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:30.469181   26921 retry.go:31] will retry after 2.737537028s: waiting for machine to come up
	I1209 22:32:33.208612   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:33.209035   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:33.209067   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:33.208987   26921 retry.go:31] will retry after 3.781811448s: waiting for machine to come up
	I1209 22:32:36.994961   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:36.995294   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find current IP address of domain addons-495659 in network mk-addons-495659
	I1209 22:32:36.995317   26899 main.go:141] libmachine: (addons-495659) DBG | I1209 22:32:36.995265   26921 retry.go:31] will retry after 5.000462753s: waiting for machine to come up
	I1209 22:32:42.000269   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.000667   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has current primary IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.000680   26899 main.go:141] libmachine: (addons-495659) Found IP for machine: 192.168.39.123
	I1209 22:32:42.000692   26899 main.go:141] libmachine: (addons-495659) Reserving static IP address...
	I1209 22:32:42.001024   26899 main.go:141] libmachine: (addons-495659) DBG | unable to find host DHCP lease matching {name: "addons-495659", mac: "52:54:00:b0:9d:b8", ip: "192.168.39.123"} in network mk-addons-495659
	I1209 22:32:42.069062   26899 main.go:141] libmachine: (addons-495659) DBG | Getting to WaitForSSH function...
	I1209 22:32:42.069092   26899 main.go:141] libmachine: (addons-495659) Reserved static IP address: 192.168.39.123
	I1209 22:32:42.069182   26899 main.go:141] libmachine: (addons-495659) Waiting for SSH to be available...
	I1209 22:32:42.071385   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.071764   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.071795   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.071895   26899 main.go:141] libmachine: (addons-495659) DBG | Using SSH client type: external
	I1209 22:32:42.071932   26899 main.go:141] libmachine: (addons-495659) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa (-rw-------)
	I1209 22:32:42.071970   26899 main.go:141] libmachine: (addons-495659) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:32:42.071992   26899 main.go:141] libmachine: (addons-495659) DBG | About to run SSH command:
	I1209 22:32:42.072007   26899 main.go:141] libmachine: (addons-495659) DBG | exit 0
	I1209 22:32:42.199368   26899 main.go:141] libmachine: (addons-495659) DBG | SSH cmd err, output: <nil>: 
	I1209 22:32:42.199666   26899 main.go:141] libmachine: (addons-495659) KVM machine creation complete!
	I1209 22:32:42.199897   26899 main.go:141] libmachine: (addons-495659) Calling .GetConfigRaw
	I1209 22:32:42.200410   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:42.200605   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:42.200761   26899 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:32:42.200777   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:32:42.201938   26899 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:32:42.201954   26899 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:32:42.201962   26899 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:32:42.201967   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.203942   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.204246   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.204277   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.204440   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:42.204607   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.204734   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.204824   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:42.204986   26899 main.go:141] libmachine: Using SSH client type: native
	I1209 22:32:42.205171   26899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1209 22:32:42.205182   26899 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:32:42.314867   26899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:32:42.314891   26899 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:32:42.314906   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.317469   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.317790   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.317813   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.317940   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:42.318102   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.318276   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.318432   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:42.318559   26899 main.go:141] libmachine: Using SSH client type: native
	I1209 22:32:42.318734   26899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1209 22:32:42.318748   26899 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:32:42.427966   26899 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:32:42.428036   26899 main.go:141] libmachine: found compatible host: buildroot
	I1209 22:32:42.428048   26899 main.go:141] libmachine: Provisioning with buildroot...
	I1209 22:32:42.428062   26899 main.go:141] libmachine: (addons-495659) Calling .GetMachineName
	I1209 22:32:42.428301   26899 buildroot.go:166] provisioning hostname "addons-495659"
	I1209 22:32:42.428323   26899 main.go:141] libmachine: (addons-495659) Calling .GetMachineName
	I1209 22:32:42.428492   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.431267   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.431606   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.431634   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.431768   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:42.431939   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.432096   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.432211   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:42.432362   26899 main.go:141] libmachine: Using SSH client type: native
	I1209 22:32:42.432521   26899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1209 22:32:42.432533   26899 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-495659 && echo "addons-495659" | sudo tee /etc/hostname
	I1209 22:32:42.556903   26899 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-495659
	
	I1209 22:32:42.556931   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.559508   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.559960   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.559982   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.560153   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:42.560336   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.560466   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.560572   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:42.560740   26899 main.go:141] libmachine: Using SSH client type: native
	I1209 22:32:42.561003   26899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1209 22:32:42.561022   26899 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-495659' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-495659/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-495659' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:32:42.680108   26899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
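The SSH command above makes the guest able to resolve its own hostname locally: if no /etc/hosts line ends in "addons-495659", the provisioner either rewrites the existing 127.0.1.1 entry or appends one. A minimal way to confirm the result from inside the guest (a hypothetical verification step, not captured in this log):

    grep -n 'addons-495659' /etc/hosts    # expected to show a line like: 127.0.1.1 addons-495659
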
	I1209 22:32:42.680138   26899 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:32:42.680162   26899 buildroot.go:174] setting up certificates
	I1209 22:32:42.680173   26899 provision.go:84] configureAuth start
	I1209 22:32:42.680182   26899 main.go:141] libmachine: (addons-495659) Calling .GetMachineName
	I1209 22:32:42.680410   26899 main.go:141] libmachine: (addons-495659) Calling .GetIP
	I1209 22:32:42.682903   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.683152   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.683185   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.683334   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.685213   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.685510   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.685533   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.685668   26899 provision.go:143] copyHostCerts
	I1209 22:32:42.685741   26899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:32:42.685861   26899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:32:42.685948   26899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:32:42.686050   26899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.addons-495659 san=[127.0.0.1 192.168.39.123 addons-495659 localhost minikube]
	I1209 22:32:42.747855   26899 provision.go:177] copyRemoteCerts
	I1209 22:32:42.747936   26899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:32:42.747967   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.750538   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.750840   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.750866   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.751057   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:42.751228   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.751385   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:42.751539   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:32:42.838865   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 22:32:42.863546   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:32:42.886766   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 22:32:42.909892   26899 provision.go:87] duration metric: took 229.705138ms to configureAuth
	I1209 22:32:42.909928   26899 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:32:42.910102   26899 config.go:182] Loaded profile config "addons-495659": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:32:42.910167   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:42.912959   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.913354   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:42.913387   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:42.913489   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:42.913660   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.913812   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:42.913937   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:42.914109   26899 main.go:141] libmachine: Using SSH client type: native
	I1209 22:32:42.914313   26899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1209 22:32:42.914328   26899 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:32:43.490681   26899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:32:43.490712   26899 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:32:43.490720   26899 main.go:141] libmachine: (addons-495659) Calling .GetURL
	I1209 22:32:43.491968   26899 main.go:141] libmachine: (addons-495659) DBG | Using libvirt version 6000000
	I1209 22:32:43.494426   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.494757   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.494788   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.494971   26899 main.go:141] libmachine: Docker is up and running!
	I1209 22:32:43.494987   26899 main.go:141] libmachine: Reticulating splines...
	I1209 22:32:43.494994   26899 client.go:171] duration metric: took 25.656022047s to LocalClient.Create
	I1209 22:32:43.495018   26899 start.go:167] duration metric: took 25.656080767s to libmachine.API.Create "addons-495659"
	I1209 22:32:43.495029   26899 start.go:293] postStartSetup for "addons-495659" (driver="kvm2")
	I1209 22:32:43.495039   26899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:32:43.495056   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:43.495313   26899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:32:43.495343   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:43.497691   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.498030   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.498058   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.498145   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:43.498352   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:43.498491   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:43.498617   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:32:43.581675   26899 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:32:43.585759   26899 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:32:43.585787   26899 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:32:43.585875   26899 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:32:43.585915   26899 start.go:296] duration metric: took 90.879017ms for postStartSetup
	I1209 22:32:43.585959   26899 main.go:141] libmachine: (addons-495659) Calling .GetConfigRaw
	I1209 22:32:43.586578   26899 main.go:141] libmachine: (addons-495659) Calling .GetIP
	I1209 22:32:43.588956   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.589265   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.589299   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.589574   26899 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/config.json ...
	I1209 22:32:43.589744   26899 start.go:128] duration metric: took 25.769621738s to createHost
	I1209 22:32:43.589765   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:43.591744   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.591994   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.592026   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.592155   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:43.592302   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:43.592435   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:43.592546   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:43.592662   26899 main.go:141] libmachine: Using SSH client type: native
	I1209 22:32:43.592797   26899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1209 22:32:43.592807   26899 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:32:43.700054   26899 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733783563.677358996
	
	I1209 22:32:43.700081   26899 fix.go:216] guest clock: 1733783563.677358996
	I1209 22:32:43.700092   26899 fix.go:229] Guest: 2024-12-09 22:32:43.677358996 +0000 UTC Remote: 2024-12-09 22:32:43.589755063 +0000 UTC m=+25.873246812 (delta=87.603933ms)
	I1209 22:32:43.700117   26899 fix.go:200] guest clock delta is within tolerance: 87.603933ms
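For reference, the delta reported above is just the difference between the two timestamps printed on the previous line: 22:32:43.677358996 (Guest) minus 22:32:43.589755063 (Remote) is 0.087603933 s, i.e. about 87.6 ms, which is inside the drift tolerance, so provisioning simply continues.
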
	I1209 22:32:43.700123   26899 start.go:83] releasing machines lock for "addons-495659", held for 25.88007683s
	I1209 22:32:43.700140   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:43.700420   26899 main.go:141] libmachine: (addons-495659) Calling .GetIP
	I1209 22:32:43.703900   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.704299   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.704339   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.704541   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:43.705010   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:43.705202   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:32:43.705294   26899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:32:43.705342   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:43.705406   26899 ssh_runner.go:195] Run: cat /version.json
	I1209 22:32:43.705429   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:32:43.708029   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.708077   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.708458   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.708496   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.708526   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:43.708544   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:43.708611   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:43.708740   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:43.708812   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:32:43.708861   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:43.708905   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:32:43.708988   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:32:43.709293   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:32:43.711714   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:32:43.810569   26899 ssh_runner.go:195] Run: systemctl --version
	I1209 22:32:43.816391   26899 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:32:43.967820   26899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:32:43.974148   26899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:32:43.974211   26899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:32:43.990164   26899 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 22:32:43.990195   26899 start.go:495] detecting cgroup driver to use...
	I1209 22:32:43.990275   26899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:32:44.006521   26899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:32:44.020521   26899 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:32:44.020569   26899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:32:44.033780   26899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:32:44.047298   26899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:32:44.173171   26899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:32:44.323153   26899 docker.go:233] disabling docker service ...
	I1209 22:32:44.323221   26899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:32:44.336858   26899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:32:44.348812   26899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:32:44.477437   26899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:32:44.603757   26899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:32:44.617546   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:32:44.635588   26899 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:32:44.635644   26899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:32:44.645750   26899 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:32:44.645816   26899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:32:44.656358   26899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:32:44.666635   26899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:32:44.677009   26899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:32:44.687553   26899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:32:44.698069   26899 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:32:44.715646   26899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
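Taken together, the sed edits above leave the CRI-O drop-in with the pause image, cgroup driver, conmon cgroup and unprivileged-port sysctl that minikube expects. A sketch of the expected values and a quick way to confirm them from inside the guest (reconstructed from the commands above, not a capture of the file):

    # Expected keys in /etc/crio/crio.conf.d/02-crio.conf after the edits:
    #   pause_image     = "registry.k8s.io/pause:3.10"
    #   cgroup_manager  = "cgroupfs"
    #   conmon_cgroup   = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
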
	I1209 22:32:44.726196   26899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:32:44.736653   26899 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:32:44.736714   26899 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:32:44.749747   26899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
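The failed sysctl probe above is expected on a freshly booted guest: the net.bridge.* sysctls only exist once br_netfilter is loaded, which is why the next two commands load the module and enable IPv4 forwarding. The same prerequisite steps, condensed into one place (same commands as in the log, shown here only as a sketch):

    sudo modprobe br_netfilter                             # makes /proc/sys/net/bridge/* appear
    sudo sysctl net.bridge.bridge-nf-call-iptables         # the probe should now succeed
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"    # IPv4 forwarding for routed cluster traffic
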
	I1209 22:32:44.759915   26899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:32:44.875504   26899 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 22:32:44.962725   26899 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:32:44.962819   26899 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:32:44.967010   26899 start.go:563] Will wait 60s for crictl version
	I1209 22:32:44.967079   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:32:44.970473   26899 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:32:45.005670   26899 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:32:45.005799   26899 ssh_runner.go:195] Run: crio --version
	I1209 22:32:45.031425   26899 ssh_runner.go:195] Run: crio --version
	I1209 22:32:45.061238   26899 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:32:45.062528   26899 main.go:141] libmachine: (addons-495659) Calling .GetIP
	I1209 22:32:45.065253   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:45.065585   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:32:45.065612   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:32:45.065840   26899 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:32:45.069902   26899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:32:45.081994   26899 kubeadm.go:883] updating cluster {Name:addons-495659 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:addons-495659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 22:32:45.082095   26899 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:32:45.082134   26899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:32:45.112172   26899 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 22:32:45.112237   26899 ssh_runner.go:195] Run: which lz4
	I1209 22:32:45.116006   26899 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 22:32:45.119953   26899 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 22:32:45.119989   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 22:32:46.427220   26899 crio.go:462] duration metric: took 1.311237852s to copy over tarball
	I1209 22:32:46.427299   26899 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 22:32:48.532533   26899 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.105197283s)
	I1209 22:32:48.532567   26899 crio.go:469] duration metric: took 2.105318924s to extract the tarball
	I1209 22:32:48.532577   26899 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 22:32:48.568777   26899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:32:48.609560   26899 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 22:32:48.609585   26899 cache_images.go:84] Images are preloaded, skipping loading
	I1209 22:32:48.609593   26899 kubeadm.go:934] updating node { 192.168.39.123 8443 v1.31.2 crio true true} ...
	I1209 22:32:48.609680   26899 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-495659 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-495659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 22:32:48.609741   26899 ssh_runner.go:195] Run: crio config
	I1209 22:32:48.653920   26899 cni.go:84] Creating CNI manager for ""
	I1209 22:32:48.653941   26899 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 22:32:48.653950   26899 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 22:32:48.653971   26899 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-495659 NodeName:addons-495659 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 22:32:48.654093   26899 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-495659"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.123"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
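The generated manifest above bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) separated by "---". It is written to /var/tmp/minikube/kubeadm.yaml.new a few steps below and only copied to /var/tmp/minikube/kubeadm.yaml right before "kubeadm init --config" consumes it. Once that copy exists, such a file can be sanity-checked by hand (a sketch, assuming the "config validate" subcommand is available in this kubeadm build, as it is in recent releases):

    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
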
	
	I1209 22:32:48.654149   26899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:32:48.663495   26899 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 22:32:48.663553   26899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 22:32:48.672464   26899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1209 22:32:48.688371   26899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:32:48.703824   26899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1209 22:32:48.719253   26899 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I1209 22:32:48.722742   26899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
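Between this edit and the host.minikube.internal edit earlier, the guest's /etc/hosts now carries both minikube-internal names. A sketch of the resulting entries and a quick check (reconstructed from the two commands, not captured from the guest):

    #   192.168.39.1     host.minikube.internal
    #   192.168.39.123   control-plane.minikube.internal
    grep 'minikube.internal' /etc/hosts
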
	I1209 22:32:48.734207   26899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:32:48.854440   26899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:32:48.870198   26899 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659 for IP: 192.168.39.123
	I1209 22:32:48.870219   26899 certs.go:194] generating shared ca certs ...
	I1209 22:32:48.870234   26899 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:48.870374   26899 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:32:49.026650   26899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt ...
	I1209 22:32:49.026677   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt: {Name:mk4aa8b3303014e859b905619dc713a14f47f0e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.026883   26899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key ...
	I1209 22:32:49.026899   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key: {Name:mkbe7959b01763460b891869efeaaa7c0b172380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.027002   26899 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:32:49.241735   26899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt ...
	I1209 22:32:49.241762   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt: {Name:mk3e1969c35e6866f4a16c819226d1b93c596515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.241936   26899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key ...
	I1209 22:32:49.241952   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key: {Name:mkd0e37343ceab4470419b978e0c2bf516f2ce3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.242048   26899 certs.go:256] generating profile certs ...
	I1209 22:32:49.242103   26899 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.key
	I1209 22:32:49.242121   26899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt with IP's: []
	I1209 22:32:49.315927   26899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt ...
	I1209 22:32:49.315955   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: {Name:mk6648f9e363648de05d75c6d2e1f1f684328858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.316136   26899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.key ...
	I1209 22:32:49.316150   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.key: {Name:mkca584ccbeb9a0c57dc763d42d17d4460c01326 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.316250   26899 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.key.9923c7c7
	I1209 22:32:49.316270   26899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.crt.9923c7c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.123]
	I1209 22:32:49.550866   26899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.crt.9923c7c7 ...
	I1209 22:32:49.550900   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.crt.9923c7c7: {Name:mkd530a8155bdf0b4e134bca78d52b27af7b4494 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.551072   26899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.key.9923c7c7 ...
	I1209 22:32:49.551086   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.key.9923c7c7: {Name:mkedf1f3966ad4142cff91543bde56963440fb0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.551158   26899 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.crt.9923c7c7 -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.crt
	I1209 22:32:49.551228   26899 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.key.9923c7c7 -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.key
	I1209 22:32:49.551279   26899 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.key
	I1209 22:32:49.551296   26899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.crt with IP's: []
	I1209 22:32:49.628687   26899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.crt ...
	I1209 22:32:49.628720   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.crt: {Name:mkcb41c20ca8b63d7740fdda3154dd5f0e5349bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.628878   26899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.key ...
	I1209 22:32:49.628890   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.key: {Name:mkb437074ff7e34ee4861308f77de157585ee72b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:32:49.629071   26899 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:32:49.629110   26899 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:32:49.629138   26899 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:32:49.629166   26899 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:32:49.629720   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:32:49.665502   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:32:49.694439   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:32:49.717759   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:32:49.741036   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 22:32:49.763881   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 22:32:49.787209   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:32:49.814492   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:32:49.837585   26899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:32:49.859783   26899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 22:32:49.875336   26899 ssh_runner.go:195] Run: openssl version
	I1209 22:32:49.880977   26899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:32:49.891205   26899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:32:49.895458   26899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:32:49.895538   26899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:32:49.900955   26899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
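The symlink name b5213941.0 is not arbitrary: OpenSSL looks CA certificates up by subject hash, and the preceding "openssl x509 -hash -noout" call computes exactly that hash for minikubeCA.pem (the ".0" suffix distinguishes multiple certificates with the same hash). A small sketch to confirm the mapping from inside the guest (a hypothetical check, not part of the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem
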
	I1209 22:32:49.911164   26899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:32:49.914973   26899 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:32:49.915018   26899 kubeadm.go:392] StartCluster: {Name:addons-495659 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 C
lusterName:addons-495659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:32:49.915082   26899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 22:32:49.915125   26899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 22:32:49.948885   26899 cri.go:89] found id: ""
	I1209 22:32:49.948949   26899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 22:32:49.959467   26899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 22:32:49.969824   26899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 22:32:49.981144   26899 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 22:32:49.981168   26899 kubeadm.go:157] found existing configuration files:
	
	I1209 22:32:49.981217   26899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 22:32:49.991130   26899 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 22:32:49.991201   26899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 22:32:50.001381   26899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 22:32:50.010909   26899 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 22:32:50.010985   26899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 22:32:50.021248   26899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 22:32:50.031081   26899 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 22:32:50.031142   26899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 22:32:50.041284   26899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 22:32:50.050323   26899 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 22:32:50.050399   26899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 22:32:50.059701   26899 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 22:32:50.106288   26899 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 22:32:50.106523   26899 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 22:32:50.208955   26899 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 22:32:50.209097   26899 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 22:32:50.209231   26899 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 22:32:50.217240   26899 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 22:32:50.452949   26899 out.go:235]   - Generating certificates and keys ...
	I1209 22:32:50.453092   26899 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 22:32:50.453173   26899 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 22:32:50.453273   26899 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 22:32:50.504259   26899 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 22:32:50.595277   26899 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 22:32:50.745501   26899 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 22:32:50.802392   26899 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 22:32:50.802686   26899 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-495659 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I1209 22:32:50.912616   26899 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 22:32:50.912842   26899 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-495659 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I1209 22:32:51.353200   26899 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 22:32:51.713825   26899 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 22:32:51.876601   26899 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 22:32:51.876718   26899 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 22:32:52.036728   26899 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 22:32:52.259193   26899 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 22:32:52.501461   26899 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 22:32:52.572835   26899 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 22:32:52.806351   26899 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 22:32:52.806852   26899 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 22:32:52.809192   26899 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 22:32:52.810736   26899 out.go:235]   - Booting up control plane ...
	I1209 22:32:52.810824   26899 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 22:32:52.810919   26899 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 22:32:52.811044   26899 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 22:32:52.826015   26899 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 22:32:52.831761   26899 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 22:32:52.831832   26899 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 22:32:52.953360   26899 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 22:32:52.953454   26899 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 22:32:53.454951   26899 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.80586ms
	I1209 22:32:53.455044   26899 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 22:32:58.454879   26899 kubeadm.go:310] [api-check] The API server is healthy after 5.001388205s
	I1209 22:32:58.465168   26899 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 22:32:58.483322   26899 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 22:32:58.506203   26899 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 22:32:58.506452   26899 kubeadm.go:310] [mark-control-plane] Marking the node addons-495659 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 22:32:58.519808   26899 kubeadm.go:310] [bootstrap-token] Using token: bnekio.9szq1yutbnib956w
	I1209 22:32:58.521388   26899 out.go:235]   - Configuring RBAC rules ...
	I1209 22:32:58.521521   26899 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 22:32:58.526140   26899 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 22:32:58.533502   26899 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 22:32:58.537218   26899 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 22:32:58.545907   26899 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 22:32:58.549552   26899 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 22:32:58.863723   26899 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 22:32:59.284238   26899 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 22:32:59.863681   26899 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 22:32:59.864583   26899 kubeadm.go:310] 
	I1209 22:32:59.864640   26899 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 22:32:59.864650   26899 kubeadm.go:310] 
	I1209 22:32:59.864742   26899 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 22:32:59.864753   26899 kubeadm.go:310] 
	I1209 22:32:59.864774   26899 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 22:32:59.864841   26899 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 22:32:59.864909   26899 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 22:32:59.864917   26899 kubeadm.go:310] 
	I1209 22:32:59.864982   26899 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 22:32:59.864992   26899 kubeadm.go:310] 
	I1209 22:32:59.865068   26899 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 22:32:59.865078   26899 kubeadm.go:310] 
	I1209 22:32:59.865152   26899 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 22:32:59.865291   26899 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 22:32:59.865391   26899 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 22:32:59.865401   26899 kubeadm.go:310] 
	I1209 22:32:59.865494   26899 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 22:32:59.865626   26899 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 22:32:59.865637   26899 kubeadm.go:310] 
	I1209 22:32:59.865707   26899 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bnekio.9szq1yutbnib956w \
	I1209 22:32:59.865853   26899 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1209 22:32:59.865912   26899 kubeadm.go:310] 	--control-plane 
	I1209 22:32:59.865932   26899 kubeadm.go:310] 
	I1209 22:32:59.866054   26899 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 22:32:59.866069   26899 kubeadm.go:310] 
	I1209 22:32:59.866179   26899 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bnekio.9szq1yutbnib956w \
	I1209 22:32:59.866350   26899 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1209 22:32:59.866921   26899 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
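	At this point kubeadm init has completed and the admin kubeconfig has been written to /etc/kubernetes/admin.conf on the node, with the API server answering on port 8443. As an illustrative check only (not something this test run executes), the freshly booted control plane could be inspected directly on the VM with the bundled kubectl binary:

	  sudo env KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.31.2/kubectl get nodes
	  sudo env KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.31.2/kubectl get pods -n kube-system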
	I1209 22:32:59.866953   26899 cni.go:84] Creating CNI manager for ""
	I1209 22:32:59.866964   26899 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 22:32:59.869233   26899 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 22:32:59.870462   26899 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 22:32:59.882769   26899 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
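	The 496-byte 1-k8s.conflist copied above is what wires the bridge CNI to the crio runtime; its exact contents are not captured in this log. Purely for illustration, a minimal bridge-plus-portmap conflist of the same shape (the pod subnet value here is an assumption, not taken from this run) looks like:

	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }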
	I1209 22:32:59.900692   26899 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 22:32:59.900753   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:32:59.900817   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-495659 minikube.k8s.io/updated_at=2024_12_09T22_32_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=addons-495659 minikube.k8s.io/primary=true
	I1209 22:32:59.929367   26899 ops.go:34] apiserver oom_adj: -16
	I1209 22:33:00.011646   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:00.512198   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:01.011613   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:01.512404   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:02.011668   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:02.512318   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:03.011603   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:03.512124   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:04.012635   26899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:33:04.111456   26899 kubeadm.go:1113] duration metric: took 4.210764385s to wait for elevateKubeSystemPrivileges
	I1209 22:33:04.111492   26899 kubeadm.go:394] duration metric: took 14.196477075s to StartCluster
	I1209 22:33:04.111509   26899 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:33:04.111660   26899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:33:04.112032   26899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:33:04.112218   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 22:33:04.112244   26899 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:33:04.112297   26899 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
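	The toEnable map above is the per-profile addon plan computed at start; the same toggles are exposed through the minikube CLI, e.g. (illustrative commands, not part of this log):

	  minikube addons list -p addons-495659
	  minikube addons enable metrics-server -p addons-495659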
	I1209 22:33:04.112387   26899 addons.go:69] Setting yakd=true in profile "addons-495659"
	I1209 22:33:04.112406   26899 addons.go:234] Setting addon yakd=true in "addons-495659"
	I1209 22:33:04.112441   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112455   26899 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-495659"
	I1209 22:33:04.112466   26899 config.go:182] Loaded profile config "addons-495659": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:33:04.112480   26899 addons.go:69] Setting cloud-spanner=true in profile "addons-495659"
	I1209 22:33:04.112491   26899 addons.go:234] Setting addon cloud-spanner=true in "addons-495659"
	I1209 22:33:04.112471   26899 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-495659"
	I1209 22:33:04.112526   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112558   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112614   26899 addons.go:69] Setting storage-provisioner=true in profile "addons-495659"
	I1209 22:33:04.112635   26899 addons.go:234] Setting addon storage-provisioner=true in "addons-495659"
	I1209 22:33:04.112663   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112726   26899 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-495659"
	I1209 22:33:04.112747   26899 addons.go:69] Setting registry=true in profile "addons-495659"
	I1209 22:33:04.112803   26899 addons.go:69] Setting gcp-auth=true in profile "addons-495659"
	I1209 22:33:04.112807   26899 addons.go:234] Setting addon registry=true in "addons-495659"
	I1209 22:33:04.112834   26899 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-495659"
	I1209 22:33:04.112881   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112911   26899 addons.go:69] Setting ingress=true in profile "addons-495659"
	I1209 22:33:04.112917   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.112929   26899 addons.go:69] Setting ingress-dns=true in profile "addons-495659"
	I1209 22:33:04.112759   26899 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-495659"
	I1209 22:33:04.112941   26899 addons.go:234] Setting addon ingress-dns=true in "addons-495659"
	I1209 22:33:04.112954   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112956   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.112970   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112972   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113005   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.112791   26899 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-495659"
	I1209 22:33:04.113025   26899 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-495659"
	I1209 22:33:04.112826   26899 mustload.go:65] Loading cluster: addons-495659
	I1209 22:33:04.112768   26899 addons.go:69] Setting metrics-server=true in profile "addons-495659"
	I1209 22:33:04.113038   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113042   26899 addons.go:234] Setting addon metrics-server=true in "addons-495659"
	I1209 22:33:04.112894   26899 addons.go:69] Setting default-storageclass=true in profile "addons-495659"
	I1209 22:33:04.113055   26899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-495659"
	I1209 22:33:04.112896   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.112782   26899 addons.go:69] Setting volcano=true in profile "addons-495659"
	I1209 22:33:04.113076   26899 addons.go:234] Setting addon volcano=true in "addons-495659"
	I1209 22:33:04.113093   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113100   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112429   26899 addons.go:69] Setting inspektor-gadget=true in profile "addons-495659"
	I1209 22:33:04.113142   26899 addons.go:234] Setting addon inspektor-gadget=true in "addons-495659"
	I1209 22:33:04.113171   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.112906   26899 addons.go:69] Setting volumesnapshots=true in profile "addons-495659"
	I1209 22:33:04.113294   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113299   26899 addons.go:234] Setting addon volumesnapshots=true in "addons-495659"
	I1209 22:33:04.112883   26899 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-495659"
	I1209 22:33:04.112930   26899 addons.go:234] Setting addon ingress=true in "addons-495659"
	I1209 22:33:04.113321   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113340   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113390   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113058   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113447   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113474   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113511   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113538   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113604   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.113705   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.113769   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113791   26899 config.go:182] Loaded profile config "addons-495659": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:33:04.113799   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.113963   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.113994   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.114044   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.114074   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.114277   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.114312   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113804   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.114694   26899 out.go:177] * Verifying Kubernetes components...
	I1209 22:33:04.113606   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.114820   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.113814   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.116079   26899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:33:04.131975   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.132025   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.135503   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.135530   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41703
	I1209 22:33:04.135554   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.135514   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34895
	I1209 22:33:04.135872   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.135906   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.136153   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.136285   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.136742   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.136762   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.136745   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.136816   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.136892   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36641
	I1209 22:33:04.137130   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.137180   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.137737   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.137774   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.137994   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38731
	I1209 22:33:04.138161   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I1209 22:33:04.138323   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.138360   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.138491   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.138631   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.139039   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.139060   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.139331   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.139348   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.139369   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.139439   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.139893   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.139910   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.139958   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.140001   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.140434   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.140957   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.140989   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.146754   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36983
	I1209 22:33:04.148201   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.148854   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.148909   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.164241   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.164882   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.164902   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.165253   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.165787   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.165824   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.174183   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I1209 22:33:04.174871   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.175546   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.175582   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.175967   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.176141   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.176973   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I1209 22:33:04.177508   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.178447   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.178472   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.179014   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.179378   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.180325   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44739
	I1209 22:33:04.181735   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.182206   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.182224   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.182645   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.183249   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.183294   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.183637   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37525
	I1209 22:33:04.184047   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I1209 22:33:04.184283   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.184399   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.185105   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.185128   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.185447   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.185650   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.185765   26899 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-495659"
	I1209 22:33:04.185808   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.186477   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I1209 22:33:04.187148   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.187148   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.187636   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.187654   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.187857   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.188011   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.188050   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.188229   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.188260   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.188502   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I1209 22:33:04.189020   26899 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1209 22:33:04.189277   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I1209 22:33:04.189473   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.189980   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.190003   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.190311   26899 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 22:33:04.190331   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1209 22:33:04.190349   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.190371   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.190408   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.190423   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46119
	I1209 22:33:04.191031   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.191073   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.191120   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.191132   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.191193   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.191216   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.191629   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.191674   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.191772   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.192189   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.192226   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.192755   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.192925   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.192962   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.194735   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36847
	I1209 22:33:04.194837   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.194852   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.195270   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.195455   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.195958   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.196536   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.196554   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.196812   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.197009   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.197255   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.197430   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.197798   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.198607   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.199058   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.199943   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.200392   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39737
	I1209 22:33:04.201159   26899 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 22:33:04.201187   26899 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 22:33:04.201267   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.202889   26899 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1209 22:33:04.202986   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1209 22:33:04.202998   26899 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 22:33:04.203014   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 22:33:04.203038   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.203195   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.203218   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.203317   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I1209 22:33:04.203321   26899 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 22:33:04.203421   26899 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 22:33:04.203440   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.204000   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.204094   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.204237   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.204256   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.204598   26899 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1209 22:33:04.204654   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 22:33:04.204749   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.204796   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.204809   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.205296   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.205339   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.204724   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.205455   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.205531   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.205966   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.206478   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.206514   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.207100   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.207120   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.208070   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.208148   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.208726   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.208764   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.208948   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.209507   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.209526   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.209937   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.210104   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.210300   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.210363   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.210384   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.211399   26899 addons.go:234] Setting addon default-storageclass=true in "addons-495659"
	I1209 22:33:04.211433   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:04.211797   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.211825   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.211908   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.212248   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.212385   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.212480   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.212580   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.213912   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.214746   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.214779   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.214962   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.215103   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.215246   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.215371   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.216221   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37409
	I1209 22:33:04.216592   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.217711   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.217733   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.218088   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.218236   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.221537   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44295
	I1209 22:33:04.222395   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.222782   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.223654   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.223671   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.224053   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.224394   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.224643   26899 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1209 22:33:04.226059   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.226267   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:04.226280   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:04.227168   26899 out.go:177]   - Using image docker.io/registry:2.8.3
	I1209 22:33:04.228170   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:04.228172   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:04.228189   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:04.228198   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:04.228204   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:04.228393   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:04.228420   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:04.228430   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	W1209 22:33:04.228505   26899 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1209 22:33:04.228834   26899 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 22:33:04.228871   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 22:33:04.228893   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.232579   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.232942   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.232968   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.233298   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.233500   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.233676   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.233801   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.237599   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I1209 22:33:04.238016   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I1209 22:33:04.240028   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I1209 22:33:04.240056   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.240121   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.240657   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.240677   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.240806   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.240820   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.241234   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.241255   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.241445   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.241479   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I1209 22:33:04.241448   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.241964   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46381
	I1209 22:33:04.242520   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.242605   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.243128   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.243384   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.243397   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.243480   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.243486   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.243716   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.243986   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.244041   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.244130   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.244142   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.244423   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.244494   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.244537   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.245515   26899 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 22:33:04.245810   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.245840   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.245521   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 22:33:04.246030   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.246229   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1209 22:33:04.247505   26899 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 22:33:04.247538   26899 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 22:33:04.247584   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.248209   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.248325   26899 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 22:33:04.249674   26899 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1209 22:33:04.250850   26899 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1209 22:33:04.250859   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41491
	I1209 22:33:04.250922   26899 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1209 22:33:04.250941   26899 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1209 22:33:04.250974   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.251425   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.252043   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.252060   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.252442   26899 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 22:33:04.252457   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 22:33:04.252470   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.252473   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.252652   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.252708   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.253237   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.253256   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.253612   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.253675   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34309
	I1209 22:33:04.253911   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.254234   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.254253   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.254734   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.254750   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.254797   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.255416   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.255609   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.256022   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.256170   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.256848   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.256865   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.259683   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.259694   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.259698   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.259706   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I1209 22:33:04.259699   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.259748   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.259762   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.259776   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.259781   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.259799   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.259819   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46429
	I1209 22:33:04.259893   26899 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1209 22:33:04.260155   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.260230   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.260291   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.260447   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.260516   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.260534   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.260569   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.260668   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.260695   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.260707   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.260727   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.260946   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.260963   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.261039   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.261287   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.261308   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.261481   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.261627   26899 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 22:33:04.261637   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 22:33:04.261647   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.261808   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:04.261848   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:04.263242   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.263774   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.264771   26899 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 22:33:04.265412   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.265477   26899 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1209 22:33:04.265849   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.265963   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.266131   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.266294   26899 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 22:33:04.266319   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 22:33:04.266322   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.266334   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.266460   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.266589   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.267059   26899 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 22:33:04.267076   26899 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 22:33:04.267087   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.270114   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.270598   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.270916   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.270937   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.271000   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.271014   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.271090   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.271190   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.271240   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.271327   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.271375   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.271586   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.271624   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.271765   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	W1209 22:33:04.272678   26899 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44612->192.168.39.123:22: read: connection reset by peer
	I1209 22:33:04.272725   26899 retry.go:31] will retry after 219.84003ms: ssh: handshake failed: read tcp 192.168.39.1:44612->192.168.39.123:22: read: connection reset by peer
	I1209 22:33:04.276933   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I1209 22:33:04.277275   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.277755   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.277773   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.278110   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.278278   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.279962   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.282348   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 22:33:04.283712   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 22:33:04.284153   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45011
	I1209 22:33:04.284460   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I1209 22:33:04.284679   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.284855   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:04.285126   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.285144   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.285423   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.285626   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:04.285644   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:04.285663   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.285974   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:04.286179   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 22:33:04.286198   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:04.287790   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.287955   26899 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 22:33:04.287966   26899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 22:33:04.287978   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.288197   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:04.288631   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 22:33:04.289494   26899 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 22:33:04.290626   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 22:33:04.290843   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.291186   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.291201   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.291328   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.291421   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.291493   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.291608   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.291796   26899 out.go:177]   - Using image docker.io/busybox:stable
	I1209 22:33:04.292829   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 22:33:04.292934   26899 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 22:33:04.292951   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 22:33:04.292968   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.294978   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 22:33:04.295621   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.295922   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.295948   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.296107   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.296260   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.296392   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.296528   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:04.297138   26899 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 22:33:04.298420   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 22:33:04.298440   26899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 22:33:04.298461   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:04.301275   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.301756   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:04.301790   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:04.301805   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:04.301975   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:04.302100   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:04.302221   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	W1209 22:33:04.302820   26899 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44632->192.168.39.123:22: read: connection reset by peer
	I1209 22:33:04.302841   26899 retry.go:31] will retry after 165.6651ms: ssh: handshake failed: read tcp 192.168.39.1:44632->192.168.39.123:22: read: connection reset by peer
	I1209 22:33:04.632575   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 22:33:04.635372   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 22:33:04.653637   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 22:33:04.666894   26899 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 22:33:04.666921   26899 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 22:33:04.683757   26899 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 22:33:04.683786   26899 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 22:33:04.688876   26899 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 22:33:04.688906   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1209 22:33:04.695018   26899 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 22:33:04.695044   26899 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 22:33:04.723045   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 22:33:04.752361   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 22:33:04.773276   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 22:33:04.774559   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 22:33:04.778464   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 22:33:04.783264   26899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:33:04.783623   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 22:33:04.825093   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 22:33:04.825124   26899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 22:33:04.856705   26899 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 22:33:04.856730   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 22:33:04.888729   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 22:33:04.907800   26899 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 22:33:04.907827   26899 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 22:33:04.920498   26899 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 22:33:04.920520   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 22:33:04.933902   26899 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 22:33:04.933923   26899 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 22:33:05.020320   26899 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 22:33:05.020346   26899 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 22:33:05.083200   26899 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 22:33:05.083232   26899 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 22:33:05.088406   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 22:33:05.088431   26899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 22:33:05.160766   26899 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 22:33:05.160793   26899 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 22:33:05.208195   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 22:33:05.222279   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 22:33:05.222305   26899 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 22:33:05.313711   26899 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 22:33:05.313738   26899 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 22:33:05.346115   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 22:33:05.346138   26899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 22:33:05.434997   26899 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 22:33:05.435018   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 22:33:05.500964   26899 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 22:33:05.500992   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 22:33:05.556713   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 22:33:05.596976   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 22:33:05.596998   26899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 22:33:05.646393   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 22:33:05.653778   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 22:33:05.873628   26899 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 22:33:05.873660   26899 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 22:33:06.257997   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.625381642s)
	I1209 22:33:06.258067   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:06.258079   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:06.258451   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:06.258477   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:06.258492   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:06.258503   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:06.258515   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:06.258779   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:06.258808   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:06.258782   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:06.274721   26899 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 22:33:06.274748   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 22:33:06.520818   26899 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 22:33:06.520850   26899 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 22:33:07.026991   26899 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 22:33:07.027021   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 22:33:07.130042   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.494633829s)
	I1209 22:33:07.130102   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:07.130114   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:07.130417   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:07.130433   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:07.130442   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:07.130449   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:07.130685   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:07.130699   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:07.468994   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.815316492s)
	I1209 22:33:07.469043   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.745962887s)
	I1209 22:33:07.469048   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:07.469081   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:07.469133   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:07.469211   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:07.469527   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:07.469540   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:07.469529   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:07.469557   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:07.469548   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:07.469578   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:07.469579   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:07.469586   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:07.469641   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:07.469768   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:07.469777   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:07.469838   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:07.469846   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:07.554211   26899 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 22:33:07.554243   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 22:33:07.862725   26899 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 22:33:07.862760   26899 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 22:33:08.173532   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 22:33:11.263328   26899 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 22:33:11.263367   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:11.266303   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:11.266710   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:11.266740   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:11.266894   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:11.267093   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:11.267248   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:11.267396   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:11.597008   26899 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 22:33:11.805212   26899 addons.go:234] Setting addon gcp-auth=true in "addons-495659"
	I1209 22:33:11.805268   26899 host.go:66] Checking if "addons-495659" exists ...
	I1209 22:33:11.805621   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:11.805702   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:11.821217   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39515
	I1209 22:33:11.821739   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:11.822214   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:11.822234   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:11.822533   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:11.823038   26899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:33:11.823075   26899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:33:11.838084   26899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
	I1209 22:33:11.838576   26899 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:33:11.839110   26899 main.go:141] libmachine: Using API Version  1
	I1209 22:33:11.839129   26899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:33:11.839483   26899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:33:11.839699   26899 main.go:141] libmachine: (addons-495659) Calling .GetState
	I1209 22:33:11.841181   26899 main.go:141] libmachine: (addons-495659) Calling .DriverName
	I1209 22:33:11.841407   26899 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 22:33:11.841432   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHHostname
	I1209 22:33:11.843959   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:11.844352   26899 main.go:141] libmachine: (addons-495659) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:9d:b8", ip: ""} in network mk-addons-495659: {Iface:virbr1 ExpiryTime:2024-12-09 23:32:32 +0000 UTC Type:0 Mac:52:54:00:b0:9d:b8 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-495659 Clientid:01:52:54:00:b0:9d:b8}
	I1209 22:33:11.844384   26899 main.go:141] libmachine: (addons-495659) DBG | domain addons-495659 has defined IP address 192.168.39.123 and MAC address 52:54:00:b0:9d:b8 in network mk-addons-495659
	I1209 22:33:11.844511   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHPort
	I1209 22:33:11.844666   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHKeyPath
	I1209 22:33:11.844806   26899 main.go:141] libmachine: (addons-495659) Calling .GetSSHUsername
	I1209 22:33:11.844917   26899 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/addons-495659/id_rsa Username:docker}
	I1209 22:33:12.227715   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.47531758s)
	I1209 22:33:12.227771   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.227783   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.227816   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.454495457s)
	I1209 22:33:12.227863   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.227875   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.227884   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.453299791s)
	I1209 22:33:12.227915   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.227929   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.227985   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.449496234s)
	I1209 22:33:12.228002   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.228010   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.228012   26899 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.444717406s)
	I1209 22:33:12.228033   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.228046   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.228054   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.228061   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.228072   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.228082   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.228092   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.228099   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.228171   26899 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.444504237s)
	I1209 22:33:12.228193   26899 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1209 22:33:12.228396   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.228432   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.228439   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.228449   26899 addons.go:475] Verifying addon ingress=true in "addons-495659"
	I1209 22:33:12.228669   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.228685   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.228694   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.228702   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.229073   26899 node_ready.go:35] waiting up to 6m0s for node "addons-495659" to be "Ready" ...
	I1209 22:33:12.229259   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.229285   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.229292   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.229428   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.229450   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.229457   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.230155   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.230170   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.230177   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.230184   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.230347   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.341589189s)
	I1209 22:33:12.230372   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.230380   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.230415   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.230434   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.230437   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.230507   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.673768887s)
	I1209 22:33:12.230533   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.230544   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.230665   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.230676   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.230680   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.584253582s)
	W1209 22:33:12.230711   26899 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 22:33:12.230749   26899 retry.go:31] will retry after 202.79381ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 22:33:12.230684   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.230775   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.230809   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.577004101s)
	I1209 22:33:12.230435   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.022206507s)
	I1209 22:33:12.230829   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.230838   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.230848   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.230840   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.231250   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.231271   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.231295   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.231301   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.231308   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.231313   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.231364   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.231387   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.231392   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.231590   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.231599   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.231607   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.231614   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.232507   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.232544   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.232551   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.232629   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.232654   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.232686   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.232691   26899 out.go:177] * Verifying ingress addon...
	I1209 22:33:12.232817   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.232832   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.232840   26899 addons.go:475] Verifying addon registry=true in "addons-495659"
	I1209 22:33:12.232693   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.234887   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.234898   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.235233   26899 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-495659 service yakd-dashboard -n yakd-dashboard
	
	I1209 22:33:12.235682   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.235691   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.235703   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.235718   26899 addons.go:475] Verifying addon metrics-server=true in "addons-495659"
	I1209 22:33:12.236051   26899 out.go:177] * Verifying registry addon...
	I1209 22:33:12.236127   26899 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 22:33:12.238228   26899 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 22:33:12.241606   26899 node_ready.go:49] node "addons-495659" has status "Ready":"True"
	I1209 22:33:12.241631   26899 node_ready.go:38] duration metric: took 12.536019ms for node "addons-495659" to be "Ready" ...
	I1209 22:33:12.241642   26899 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:33:12.268765   26899 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 22:33:12.268793   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:12.268879   26899 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 22:33:12.268905   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:12.269982   26899 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:12.298583   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.298609   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.298888   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.298923   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.298948   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	W1209 22:33:12.299036   26899 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1209 22:33:12.315582   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:12.315608   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:12.316014   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:12.316025   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:12.316042   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:12.434234   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 22:33:12.732924   26899 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-495659" context rescaled to 1 replicas
	I1209 22:33:12.740496   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:12.742534   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:13.249889   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:13.249905   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:13.750357   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:13.750427   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:14.536392   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:14.536479   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:14.564081   26899 pod_ready.go:103] pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:14.747552   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.573812865s)
	I1209 22:33:14.747632   26899 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.906201867s)
	I1209 22:33:14.747631   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:14.747791   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:14.748043   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:14.748058   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:14.748068   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:14.748075   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:14.748284   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:14.748299   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:14.748310   26899 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-495659"
	I1209 22:33:14.749358   26899 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 22:33:14.750097   26899 out.go:177] * Verifying csi-hostpath-driver addon...
	I1209 22:33:14.751807   26899 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 22:33:14.752514   26899 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 22:33:14.752845   26899 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 22:33:14.752858   26899 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 22:33:14.773230   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:14.773373   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:14.782063   26899 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 22:33:14.782093   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:14.942684   26899 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 22:33:14.942705   26899 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 22:33:15.001281   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.566997309s)
	I1209 22:33:15.001348   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:15.001363   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:15.001687   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:15.001736   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:15.001751   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:15.001760   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:15.001713   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:15.001971   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:15.002029   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:15.002049   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:15.072529   26899 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 22:33:15.072555   26899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 22:33:15.181724   26899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 22:33:15.243240   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:15.243818   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:15.342647   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:15.751688   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:15.751740   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:15.759993   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:16.277133   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:16.279599   26899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.09784483s)
	I1209 22:33:16.279641   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:16.279653   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:16.279928   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:16.279944   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:16.279955   26899 main.go:141] libmachine: Making call to close driver server
	I1209 22:33:16.279962   26899 main.go:141] libmachine: (addons-495659) Calling .Close
	I1209 22:33:16.280313   26899 main.go:141] libmachine: (addons-495659) DBG | Closing plugin on server side
	I1209 22:33:16.280315   26899 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:33:16.280334   26899 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:33:16.281840   26899 addons.go:475] Verifying addon gcp-auth=true in "addons-495659"
	I1209 22:33:16.283310   26899 out.go:177] * Verifying gcp-auth addon...
	I1209 22:33:16.284973   26899 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 22:33:16.307731   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:16.308257   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:16.330721   26899 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 22:33:16.330741   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:16.741657   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:16.742732   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:16.758055   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:16.781843   26899 pod_ready.go:103] pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:16.790249   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:17.240512   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:17.241997   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:17.256178   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:17.288877   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:17.741649   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:17.741984   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:17.757886   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:17.788008   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:18.241912   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:18.243339   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:18.257069   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:18.288322   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:18.945494   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:18.946440   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:18.946554   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:18.946729   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:18.952843   26899 pod_ready.go:103] pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:19.240982   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:19.245501   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:19.257288   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:19.287628   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:19.740955   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:19.742827   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:19.757301   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:19.788915   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:20.245857   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:20.246387   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:20.256912   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:20.287920   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:20.740768   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:20.742329   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:20.756488   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:20.788457   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:21.240769   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:21.241799   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:21.259060   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:21.275369   26899 pod_ready.go:103] pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:21.287453   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:21.740534   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:21.742082   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:21.756484   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:21.789057   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:22.240395   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:22.242614   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:22.257163   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:22.288721   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:22.741778   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:22.742802   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:22.757860   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:22.788419   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:23.240957   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:23.242278   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:23.256746   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:23.276246   26899 pod_ready.go:93] pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:23.276267   26899 pod_ready.go:82] duration metric: took 11.006264843s for pod "amd-gpu-device-plugin-k9c92" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.276277   26899 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-665x9" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.277715   26899 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-665x9" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-665x9" not found
	I1209 22:33:23.277731   26899 pod_ready.go:82] duration metric: took 1.448647ms for pod "coredns-7c65d6cfc9-665x9" in "kube-system" namespace to be "Ready" ...
	E1209 22:33:23.277739   26899 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-665x9" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-665x9" not found
	I1209 22:33:23.277746   26899 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d7jm7" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.282626   26899 pod_ready.go:93] pod "coredns-7c65d6cfc9-d7jm7" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:23.282659   26899 pod_ready.go:82] duration metric: took 4.904458ms for pod "coredns-7c65d6cfc9-d7jm7" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.282672   26899 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.287277   26899 pod_ready.go:93] pod "etcd-addons-495659" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:23.287306   26899 pod_ready.go:82] duration metric: took 4.625929ms for pod "etcd-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.287318   26899 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.288086   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:23.291396   26899 pod_ready.go:93] pod "kube-apiserver-addons-495659" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:23.291413   26899 pod_ready.go:82] duration metric: took 4.085678ms for pod "kube-apiserver-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.291421   26899 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.474926   26899 pod_ready.go:93] pod "kube-controller-manager-addons-495659" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:23.474950   26899 pod_ready.go:82] duration metric: took 183.522974ms for pod "kube-controller-manager-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.474962   26899 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x6vmt" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.742380   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:23.743152   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:23.756321   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:23.789695   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:23.874625   26899 pod_ready.go:93] pod "kube-proxy-x6vmt" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:23.874655   26899 pod_ready.go:82] duration metric: took 399.68642ms for pod "kube-proxy-x6vmt" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:23.874669   26899 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:24.243738   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:24.243860   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:24.257020   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:24.274626   26899 pod_ready.go:93] pod "kube-scheduler-addons-495659" in "kube-system" namespace has status "Ready":"True"
	I1209 22:33:24.274650   26899 pod_ready.go:82] duration metric: took 399.973086ms for pod "kube-scheduler-addons-495659" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:24.274663   26899 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace to be "Ready" ...
	I1209 22:33:24.288805   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:24.742685   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:24.743180   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:24.756732   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:24.788272   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:25.241598   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:25.242455   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:25.256908   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:25.288774   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:25.741277   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:25.743267   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:25.757201   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:25.789714   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:26.241602   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:26.243604   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:26.257076   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:26.282211   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:26.288164   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:26.741439   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:26.741753   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:26.757103   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:26.788170   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:27.463878   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:27.464272   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:27.465030   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:27.465800   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:27.741461   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:27.742329   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:27.756917   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:27.789227   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:28.240918   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:28.242263   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:28.256808   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:28.288525   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:28.742038   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:28.742784   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:28.757263   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:28.781361   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:28.788085   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:29.241669   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:29.242816   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:29.256558   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:29.288236   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:29.741661   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:29.742516   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:29.757233   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:29.787552   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:30.241288   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:30.243626   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:30.258436   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:30.291738   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:30.741021   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:30.742260   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:30.757669   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:30.781703   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:30.788529   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:31.240451   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:31.241609   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:31.256868   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:31.287638   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:32.159039   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:32.160960   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:32.161294   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:32.161677   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:32.240516   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:32.242838   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:32.258040   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:32.288627   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:32.746804   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:32.748595   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:32.756110   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:32.788907   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:33.242590   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:33.243167   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:33.256890   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:33.280663   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:33.288308   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:33.741740   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:33.745461   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:33.765110   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:33.788541   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:34.284714   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:34.284869   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:34.288190   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:34.289933   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:34.741227   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:34.741367   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:34.757801   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:34.791240   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:35.241343   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:35.242281   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:35.256974   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:35.281699   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:35.288600   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:35.741395   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:35.741940   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:35.756504   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:35.787416   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:36.240169   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:36.242836   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:36.256869   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:36.288362   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:36.741453   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:36.742553   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:36.756855   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:36.789101   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:37.241993   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:37.242583   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:37.256940   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:37.288243   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:37.740711   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:37.742262   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:37.756374   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:37.780728   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:37.787628   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:38.241299   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:38.242408   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:38.256763   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:38.288858   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:38.742515   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:38.742857   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:38.757206   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:38.787590   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:39.241672   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:39.242580   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:39.257041   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:39.341725   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:39.741241   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:39.742269   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:39.757000   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:39.788354   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:40.242331   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:40.242524   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:40.257363   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:40.280607   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:40.287838   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:40.741765   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:40.743166   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:40.756837   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:40.788269   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:41.241194   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:41.242859   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:41.256045   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:41.287753   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:41.740935   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:41.742236   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:41.756391   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:41.790421   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:42.241268   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:42.242733   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:42.257044   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:42.281843   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:42.288785   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:42.741803   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:42.743505   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:42.756908   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:42.788401   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:43.240936   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:43.242621   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:43.257312   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:43.288654   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:43.742575   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:43.842831   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:43.843147   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:43.843178   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:44.240970   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:44.242669   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:44.255879   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:44.288450   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:44.741097   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:44.742376   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:44.756609   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:44.779833   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:44.788389   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:45.240251   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:45.242133   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:45.256358   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:45.288245   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:45.744190   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:45.744310   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:45.757213   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:45.788445   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:46.241342   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:46.241739   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:46.257495   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:46.287746   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:46.740904   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:46.742227   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:46.757165   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:46.781079   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:46.789270   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:47.242086   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:47.242461   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:47.257065   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:47.288547   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:47.741156   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:47.743390   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:47.756449   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:47.788147   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:48.242176   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:48.242225   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:48.257521   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:48.288536   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:48.743423   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:48.743800   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:48.758103   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:48.782046   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:48.788475   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:49.241804   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:49.242616   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:49.257604   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:49.290202   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:49.741029   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:49.742185   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:49.756502   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:49.787990   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:50.241337   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:50.242545   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:50.257018   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:50.287558   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:50.761534   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:50.761562   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:50.762302   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:50.787454   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:51.242415   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:51.242490   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:51.256880   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:51.281932   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:51.289030   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:51.740931   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:51.741510   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:51.756459   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:51.787590   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:52.241731   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:52.242226   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:52.256565   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:52.288296   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:52.741783   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:52.742457   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:52.756661   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:52.787907   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:53.239989   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:53.242141   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:53.256795   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:53.289095   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:53.741557   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:53.742709   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:53.757979   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:53.782006   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:53.789029   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:54.240582   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:54.241726   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:54.256408   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:54.287791   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:54.741436   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:54.741641   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:54.757082   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:54.788060   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:55.241887   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:55.242002   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:55.257422   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:55.287766   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:55.740673   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:55.742171   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:55.756150   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:55.788874   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:56.240880   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:56.242268   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:56.257200   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:56.281512   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:56.287607   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:56.741981   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:56.742109   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:56.756843   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:57.184605   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:57.242757   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:57.243895   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:57.259892   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:57.288282   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:57.740825   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:57.742374   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:57.757128   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:57.793411   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:58.241919   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:58.243342   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:58.257124   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:58.281628   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:33:58.342541   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:58.740898   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:58.742488   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:58.762260   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:58.795716   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:59.241746   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:59.242094   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:59.256547   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:59.289221   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:33:59.742238   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:33:59.742568   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:33:59.757568   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:33:59.787899   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:00.240269   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:00.241838   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:34:00.258101   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:00.287894   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:00.741499   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:00.743514   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:34:00.757918   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:00.781180   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:00.787612   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:01.240466   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:01.242276   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 22:34:01.256229   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:01.287846   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:01.749020   26899 kapi.go:107] duration metric: took 49.510785984s to wait for kubernetes.io/minikube-addons=registry ...
	I1209 22:34:01.750985   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:01.758204   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:01.790226   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:02.241461   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:02.257539   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:02.287835   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:02.741055   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:02.757456   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:02.788542   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:03.241050   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:03.257148   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:03.288092   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:03.288754   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:03.741154   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:03.757296   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:03.788096   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:04.241806   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:04.257518   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:04.288081   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:04.741198   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:04.757752   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:04.788111   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:05.242186   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:05.257266   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:05.341268   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:05.749219   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:05.764318   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:05.788736   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:05.789798   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:06.240888   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:06.256891   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:06.287732   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:06.741676   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:06.757100   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:06.788606   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:07.244336   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:07.258612   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:07.289584   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:07.740382   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:07.756464   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:07.787854   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:08.241619   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:08.256531   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:08.280791   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:08.288681   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:08.750115   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:08.765303   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:08.805207   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:09.241267   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:09.673842   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:09.675446   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:09.777854   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:09.778173   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:09.884909   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:10.242684   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:10.258022   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:10.284348   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:10.288460   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:10.740337   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:10.757895   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:10.788328   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:11.241575   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:11.257073   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:11.341754   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:11.740932   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:11.756609   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:11.788352   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:12.240469   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:12.258520   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:12.289583   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:12.741259   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:12.756810   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:12.780297   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:12.788132   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:13.245214   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:13.258787   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:13.287663   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:13.741547   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:13.757387   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:13.788531   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:14.240917   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:14.257714   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:14.288060   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:14.744167   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:14.757430   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:14.782535   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:14.789235   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:15.243320   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:15.257520   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:15.290068   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:15.741177   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:15.757335   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:15.788364   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:16.240227   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:16.256731   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:16.288037   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:16.741932   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:16.757558   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:16.788801   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:17.242442   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:17.256427   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:17.281337   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:17.288630   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:17.741555   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:17.760122   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:17.787979   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:18.242685   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:18.260742   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 22:34:18.289032   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:18.746433   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:18.762757   26899 kapi.go:107] duration metric: took 1m4.010238048s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 22:34:18.789140   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:19.241266   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:19.282796   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:19.289253   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:19.742722   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:19.788226   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:20.241520   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:20.287653   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:20.745036   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:20.788511   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:21.240359   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:21.288361   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:21.740870   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:21.780739   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:21.788541   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:22.240230   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:22.288559   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:22.740961   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:22.788790   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:23.241361   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:23.585963   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:23.742905   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:23.781070   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:23.788790   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:24.241187   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:24.288888   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:24.742154   26899 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 22:34:24.787929   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:25.242465   26899 kapi.go:107] duration metric: took 1m13.00633547s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 22:34:25.288678   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:25.785054   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:25.788777   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:26.291783   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:26.788130   26899 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 22:34:27.288796   26899 kapi.go:107] duration metric: took 1m11.003819422s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 22:34:27.290719   26899 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-495659 cluster.
	I1209 22:34:27.292052   26899 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 22:34:27.293527   26899 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1209 22:34:27.294944   26899 out.go:177] * Enabled addons: amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1209 22:34:27.296221   26899 addons.go:510] duration metric: took 1m23.183925972s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
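
The readiness gates logged above (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) can also be re-checked by hand against the same cluster. The commands below are an illustrative sketch only, not output from this run; the namespaces are assumptions based on the usual minikube addon layout and may differ, and the same pattern applies to the kubernetes.io/minikube-addons=csi-hostpath-driver selector.

    # Sketch: manually re-checking the addon readiness gates (assumed namespaces).
    kubectl --context addons-495659 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=5m
    # The ingress admission job pods carry this label but complete instead of
    # becoming Ready, so a narrower selector may be needed for the controller.
    kubectl --context addons-495659 -n ingress-nginx wait pod \
      -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=5m
    kubectl --context addons-495659 -n gcp-auth wait pod \
      -l kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=5m
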
	I1209 22:34:28.281187   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:30.282803   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:32.780777   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:34.781024   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:36.782226   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:39.280201   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:41.286389   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:43.781192   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:46.281106   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:48.781341   26899 pod_ready.go:103] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"False"
	I1209 22:34:49.779815   26899 pod_ready.go:93] pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace has status "Ready":"True"
	I1209 22:34:49.779839   26899 pod_ready.go:82] duration metric: took 1m25.505168562s for pod "metrics-server-84c5f94fbc-drvs4" in "kube-system" namespace to be "Ready" ...
	I1209 22:34:49.779848   26899 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-wbphv" in "kube-system" namespace to be "Ready" ...
	I1209 22:34:49.783784   26899 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-wbphv" in "kube-system" namespace has status "Ready":"True"
	I1209 22:34:49.783801   26899 pod_ready.go:82] duration metric: took 3.946164ms for pod "nvidia-device-plugin-daemonset-wbphv" in "kube-system" namespace to be "Ready" ...
	I1209 22:34:49.783863   26899 pod_ready.go:39] duration metric: took 1m37.542205547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:34:49.783888   26899 api_server.go:52] waiting for apiserver process to appear ...
	I1209 22:34:49.783914   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 22:34:49.783970   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 22:34:49.825921   26899 cri.go:89] found id: "0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72"
	I1209 22:34:49.825945   26899 cri.go:89] found id: ""
	I1209 22:34:49.825953   26899 logs.go:282] 1 containers: [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72]
	I1209 22:34:49.825996   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:34:49.829776   26899 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 22:34:49.829829   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 22:34:49.870376   26899 cri.go:89] found id: "4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c"
	I1209 22:34:49.870394   26899 cri.go:89] found id: ""
	I1209 22:34:49.870401   26899 logs.go:282] 1 containers: [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c]
	I1209 22:34:49.870446   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:34:49.874556   26899 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 22:34:49.874606   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 22:34:49.914512   26899 cri.go:89] found id: "0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51"
	I1209 22:34:49.914539   26899 cri.go:89] found id: ""
	I1209 22:34:49.914545   26899 logs.go:282] 1 containers: [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51]
	I1209 22:34:49.914590   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:34:49.918790   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 22:34:49.918836   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 22:34:49.955423   26899 cri.go:89] found id: "69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb"
	I1209 22:34:49.955441   26899 cri.go:89] found id: ""
	I1209 22:34:49.955448   26899 logs.go:282] 1 containers: [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb]
	I1209 22:34:49.955499   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:34:49.959129   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 22:34:49.959178   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 22:34:49.997890   26899 cri.go:89] found id: "03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea"
	I1209 22:34:49.997919   26899 cri.go:89] found id: ""
	I1209 22:34:49.997926   26899 logs.go:282] 1 containers: [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea]
	I1209 22:34:49.997971   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:34:50.001647   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 22:34:50.001700   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 22:34:50.044946   26899 cri.go:89] found id: "3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9"
	I1209 22:34:50.044968   26899 cri.go:89] found id: ""
	I1209 22:34:50.044975   26899 logs.go:282] 1 containers: [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9]
	I1209 22:34:50.045018   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:34:50.049033   26899 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 22:34:50.049085   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 22:34:50.087997   26899 cri.go:89] found id: ""
	I1209 22:34:50.088020   26899 logs.go:282] 0 containers: []
	W1209 22:34:50.088027   26899 logs.go:284] No container was found matching "kindnet"
	I1209 22:34:50.088036   26899 logs.go:123] Gathering logs for kubelet ...
	I1209 22:34:50.088047   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 22:34:50.145753   26899 logs.go:138] Found kubelet problem: Dec 09 22:33:16 addons-495659 kubelet[1207]: W1209 22:33:16.248349    1207 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-495659" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-495659' and this object
	W1209 22:34:50.145946   26899 logs.go:138] Found kubelet problem: Dec 09 22:33:16 addons-495659 kubelet[1207]: E1209 22:33:16.248388    1207 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-495659\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-495659' and this object" logger="UnhandledError"
	I1209 22:34:50.167423   26899 logs.go:123] Gathering logs for describe nodes ...
	I1209 22:34:50.167450   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 22:34:50.425246   26899 logs.go:123] Gathering logs for kube-scheduler [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb] ...
	I1209 22:34:50.425272   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb"
	I1209 22:34:50.467699   26899 logs.go:123] Gathering logs for CRI-O ...
	I1209 22:34:50.467729   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 22:34:51.534755   26899 logs.go:123] Gathering logs for dmesg ...
	I1209 22:34:51.534797   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 22:34:51.549092   26899 logs.go:123] Gathering logs for kube-apiserver [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72] ...
	I1209 22:34:51.549130   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72"
	I1209 22:34:51.597413   26899 logs.go:123] Gathering logs for etcd [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c] ...
	I1209 22:34:51.597443   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c"
	I1209 22:34:51.658321   26899 logs.go:123] Gathering logs for coredns [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51] ...
	I1209 22:34:51.658366   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51"
	I1209 22:34:51.696408   26899 logs.go:123] Gathering logs for kube-proxy [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea] ...
	I1209 22:34:51.696444   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea"
	I1209 22:34:51.733330   26899 logs.go:123] Gathering logs for kube-controller-manager [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9] ...
	I1209 22:34:51.733358   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9"
	I1209 22:34:51.799961   26899 logs.go:123] Gathering logs for container status ...
	I1209 22:34:51.800000   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 22:34:51.849597   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:34:51.849623   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 22:34:51.849675   26899 out.go:270] X Problems detected in kubelet:
	W1209 22:34:51.849690   26899 out.go:270]   Dec 09 22:33:16 addons-495659 kubelet[1207]: W1209 22:33:16.248349    1207 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-495659" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-495659' and this object
	W1209 22:34:51.849706   26899 out.go:270]   Dec 09 22:33:16 addons-495659 kubelet[1207]: E1209 22:33:16.248388    1207 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-495659\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-495659' and this object" logger="UnhandledError"
	I1209 22:34:51.849715   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:34:51.849723   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
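
The diagnostics pass above enumerates the control-plane containers with crictl and then pulls kubelet, CRI-O, and per-container logs. A rough way to reproduce the same collection by hand on the node is sketched below; it is illustrative only (the container ID is a placeholder) and simply reruns, over minikube ssh, the commands the log shows being executed.

    # Sketch: rerunning the same diagnostics on the addons-495659 node via minikube ssh.
    minikube -p addons-495659 ssh -- sudo crictl ps -a --name=kube-apiserver --quiet
    minikube -p addons-495659 ssh -- sudo journalctl -u kubelet -n 400 --no-pager
    minikube -p addons-495659 ssh -- sudo journalctl -u crio -n 400 --no-pager
    # <container-id> is a placeholder for an ID returned by the crictl ps call above.
    minikube -p addons-495659 ssh -- sudo crictl logs --tail 400 <container-id>
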
	I1209 22:35:01.851299   26899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 22:35:01.872018   26899 api_server.go:72] duration metric: took 1m57.759733019s to wait for apiserver process to appear ...
	I1209 22:35:01.872046   26899 api_server.go:88] waiting for apiserver healthz status ...
	I1209 22:35:01.872083   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 22:35:01.872130   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 22:35:01.922822   26899 cri.go:89] found id: "0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72"
	I1209 22:35:01.922847   26899 cri.go:89] found id: ""
	I1209 22:35:01.922857   26899 logs.go:282] 1 containers: [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72]
	I1209 22:35:01.922913   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:01.927123   26899 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 22:35:01.927179   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 22:35:01.974561   26899 cri.go:89] found id: "4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c"
	I1209 22:35:01.974582   26899 cri.go:89] found id: ""
	I1209 22:35:01.974591   26899 logs.go:282] 1 containers: [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c]
	I1209 22:35:01.974655   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:01.978685   26899 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 22:35:01.978743   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 22:35:02.018657   26899 cri.go:89] found id: "0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51"
	I1209 22:35:02.018680   26899 cri.go:89] found id: ""
	I1209 22:35:02.018687   26899 logs.go:282] 1 containers: [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51]
	I1209 22:35:02.018730   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:02.022840   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 22:35:02.022898   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 22:35:02.071243   26899 cri.go:89] found id: "69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb"
	I1209 22:35:02.071271   26899 cri.go:89] found id: ""
	I1209 22:35:02.071279   26899 logs.go:282] 1 containers: [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb]
	I1209 22:35:02.071330   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:02.076515   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 22:35:02.076584   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 22:35:02.119449   26899 cri.go:89] found id: "03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea"
	I1209 22:35:02.119480   26899 cri.go:89] found id: ""
	I1209 22:35:02.119491   26899 logs.go:282] 1 containers: [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea]
	I1209 22:35:02.119555   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:02.123723   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 22:35:02.123801   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 22:35:02.169926   26899 cri.go:89] found id: "3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9"
	I1209 22:35:02.169957   26899 cri.go:89] found id: ""
	I1209 22:35:02.169967   26899 logs.go:282] 1 containers: [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9]
	I1209 22:35:02.170024   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:02.174412   26899 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 22:35:02.174486   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 22:35:02.212103   26899 cri.go:89] found id: ""
	I1209 22:35:02.212136   26899 logs.go:282] 0 containers: []
	W1209 22:35:02.212150   26899 logs.go:284] No container was found matching "kindnet"
	I1209 22:35:02.212162   26899 logs.go:123] Gathering logs for container status ...
	I1209 22:35:02.212177   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 22:35:02.261653   26899 logs.go:123] Gathering logs for kubelet ...
	I1209 22:35:02.261685   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 22:35:02.321962   26899 logs.go:138] Found kubelet problem: Dec 09 22:33:16 addons-495659 kubelet[1207]: W1209 22:33:16.248349    1207 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-495659" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-495659' and this object
	W1209 22:35:02.322137   26899 logs.go:138] Found kubelet problem: Dec 09 22:33:16 addons-495659 kubelet[1207]: E1209 22:33:16.248388    1207 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-495659\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-495659' and this object" logger="UnhandledError"
	I1209 22:35:02.346178   26899 logs.go:123] Gathering logs for describe nodes ...
	I1209 22:35:02.346214   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 22:35:02.469641   26899 logs.go:123] Gathering logs for coredns [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51] ...
	I1209 22:35:02.469671   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51"
	I1209 22:35:02.510796   26899 logs.go:123] Gathering logs for kube-proxy [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea] ...
	I1209 22:35:02.510825   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea"
	I1209 22:35:02.548664   26899 logs.go:123] Gathering logs for kube-controller-manager [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9] ...
	I1209 22:35:02.548693   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9"
	I1209 22:35:02.612895   26899 logs.go:123] Gathering logs for CRI-O ...
	I1209 22:35:02.612934   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 22:35:03.580580   26899 logs.go:123] Gathering logs for dmesg ...
	I1209 22:35:03.580629   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 22:35:03.594934   26899 logs.go:123] Gathering logs for kube-apiserver [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72] ...
	I1209 22:35:03.594968   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72"
	I1209 22:35:03.638381   26899 logs.go:123] Gathering logs for etcd [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c] ...
	I1209 22:35:03.638418   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c"
	I1209 22:35:03.716464   26899 logs.go:123] Gathering logs for kube-scheduler [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb] ...
	I1209 22:35:03.716501   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb"
	I1209 22:35:03.762891   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:35:03.762920   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 22:35:03.762988   26899 out.go:270] X Problems detected in kubelet:
	W1209 22:35:03.763002   26899 out.go:270]   Dec 09 22:33:16 addons-495659 kubelet[1207]: W1209 22:33:16.248349    1207 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-495659" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-495659' and this object
	W1209 22:35:03.763013   26899 out.go:270]   Dec 09 22:33:16 addons-495659 kubelet[1207]: E1209 22:33:16.248388    1207 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-495659\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-495659' and this object" logger="UnhandledError"
	I1209 22:35:03.763028   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:35:03.763040   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:35:13.764149   26899 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I1209 22:35:13.768720   26899 api_server.go:279] https://192.168.39.123:8443/healthz returned 200:
	ok
	I1209 22:35:13.769825   26899 api_server.go:141] control plane version: v1.31.2
	I1209 22:35:13.769854   26899 api_server.go:131] duration metric: took 11.897797249s to wait for apiserver health ...
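
Once the 200/ok response above comes back, the same health endpoints can be probed without handling the apiserver serving certificate directly by going through kubectl. A minimal sketch, using the context name from this run:

    # Sketch: querying the apiserver health endpoints through kubectl.
    kubectl --context addons-495659 get --raw /healthz
    kubectl --context addons-495659 get --raw '/readyz?verbose'
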
	I1209 22:35:13.769864   26899 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 22:35:13.769888   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 22:35:13.769980   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 22:35:13.807265   26899 cri.go:89] found id: "0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72"
	I1209 22:35:13.807294   26899 cri.go:89] found id: ""
	I1209 22:35:13.807305   26899 logs.go:282] 1 containers: [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72]
	I1209 22:35:13.807369   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:13.811313   26899 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 22:35:13.811377   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 22:35:13.854925   26899 cri.go:89] found id: "4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c"
	I1209 22:35:13.854952   26899 cri.go:89] found id: ""
	I1209 22:35:13.854960   26899 logs.go:282] 1 containers: [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c]
	I1209 22:35:13.855006   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:13.860037   26899 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 22:35:13.860086   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 22:35:13.903000   26899 cri.go:89] found id: "0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51"
	I1209 22:35:13.903021   26899 cri.go:89] found id: ""
	I1209 22:35:13.903028   26899 logs.go:282] 1 containers: [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51]
	I1209 22:35:13.903072   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:13.908353   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 22:35:13.908407   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 22:35:13.944140   26899 cri.go:89] found id: "69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb"
	I1209 22:35:13.944162   26899 cri.go:89] found id: ""
	I1209 22:35:13.944172   26899 logs.go:282] 1 containers: [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb]
	I1209 22:35:13.944223   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:13.948012   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 22:35:13.948070   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 22:35:13.983935   26899 cri.go:89] found id: "03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea"
	I1209 22:35:13.983954   26899 cri.go:89] found id: ""
	I1209 22:35:13.983961   26899 logs.go:282] 1 containers: [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea]
	I1209 22:35:13.984001   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:13.988147   26899 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 22:35:13.988205   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 22:35:14.033548   26899 cri.go:89] found id: "3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9"
	I1209 22:35:14.033571   26899 cri.go:89] found id: ""
	I1209 22:35:14.033582   26899 logs.go:282] 1 containers: [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9]
	I1209 22:35:14.033641   26899 ssh_runner.go:195] Run: which crictl
	I1209 22:35:14.037633   26899 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 22:35:14.037699   26899 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 22:35:14.074181   26899 cri.go:89] found id: ""
	I1209 22:35:14.074203   26899 logs.go:282] 0 containers: []
	W1209 22:35:14.074214   26899 logs.go:284] No container was found matching "kindnet"
	I1209 22:35:14.074224   26899 logs.go:123] Gathering logs for kube-controller-manager [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9] ...
	I1209 22:35:14.074238   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9"
	I1209 22:35:14.131215   26899 logs.go:123] Gathering logs for CRI-O ...
	I1209 22:35:14.131247   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 22:35:15.055958   26899 logs.go:123] Gathering logs for kubelet ...
	I1209 22:35:15.056004   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 22:35:15.117052   26899 logs.go:138] Found kubelet problem: Dec 09 22:33:16 addons-495659 kubelet[1207]: W1209 22:33:16.248349    1207 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-495659" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-495659' and this object
	W1209 22:35:15.117238   26899 logs.go:138] Found kubelet problem: Dec 09 22:33:16 addons-495659 kubelet[1207]: E1209 22:33:16.248388    1207 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-495659\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-495659' and this object" logger="UnhandledError"
	I1209 22:35:15.139988   26899 logs.go:123] Gathering logs for describe nodes ...
	I1209 22:35:15.140012   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 22:35:15.274654   26899 logs.go:123] Gathering logs for kube-apiserver [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72] ...
	I1209 22:35:15.274699   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72"
	I1209 22:35:15.341489   26899 logs.go:123] Gathering logs for etcd [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c] ...
	I1209 22:35:15.341535   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c"
	I1209 22:35:15.399237   26899 logs.go:123] Gathering logs for kube-proxy [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea] ...
	I1209 22:35:15.399268   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea"
	I1209 22:35:15.448277   26899 logs.go:123] Gathering logs for dmesg ...
	I1209 22:35:15.448311   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 22:35:15.463322   26899 logs.go:123] Gathering logs for coredns [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51] ...
	I1209 22:35:15.463357   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51"
	I1209 22:35:15.512136   26899 logs.go:123] Gathering logs for kube-scheduler [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb] ...
	I1209 22:35:15.512165   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb"
	I1209 22:35:15.564177   26899 logs.go:123] Gathering logs for container status ...
	I1209 22:35:15.564216   26899 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 22:35:15.618057   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:35:15.618081   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 22:35:15.618130   26899 out.go:270] X Problems detected in kubelet:
	W1209 22:35:15.618140   26899 out.go:270]   Dec 09 22:33:16 addons-495659 kubelet[1207]: W1209 22:33:16.248349    1207 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-495659" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-495659' and this object
	W1209 22:35:15.618151   26899 out.go:270]   Dec 09 22:33:16 addons-495659 kubelet[1207]: E1209 22:33:16.248388    1207 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-495659\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-495659' and this object" logger="UnhandledError"
	I1209 22:35:15.618157   26899 out.go:358] Setting ErrFile to fd 2...
	I1209 22:35:15.618162   26899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:35:25.627925   26899 system_pods.go:59] 18 kube-system pods found
	I1209 22:35:25.627953   26899 system_pods.go:61] "amd-gpu-device-plugin-k9c92" [0ae134a6-d82f-4b75-adef-ebd11156ef7e] Running
	I1209 22:35:25.627958   26899 system_pods.go:61] "coredns-7c65d6cfc9-d7jm7" [d8dad938-bb60-4879-907c-12003e131d8e] Running
	I1209 22:35:25.627962   26899 system_pods.go:61] "csi-hostpath-attacher-0" [9df0b766-98a8-45e9-a41a-b2d57a6f0b69] Running
	I1209 22:35:25.627966   26899 system_pods.go:61] "csi-hostpath-resizer-0" [1b9c7557-95a8-4767-8ae9-5765b9249de1] Running
	I1209 22:35:25.627969   26899 system_pods.go:61] "csi-hostpathplugin-g2mgw" [9d710134-71c5-4a26-86cd-f58e421e155c] Running
	I1209 22:35:25.627973   26899 system_pods.go:61] "etcd-addons-495659" [ad9e1594-8b6b-4f6b-a2b2-ba6c27608281] Running
	I1209 22:35:25.627977   26899 system_pods.go:61] "kube-apiserver-addons-495659" [8e8b50f7-6b12-436e-8373-822f3a7dce46] Running
	I1209 22:35:25.627981   26899 system_pods.go:61] "kube-controller-manager-addons-495659" [050e1ad7-dfe2-4dfd-aade-ba853c720d25] Running
	I1209 22:35:25.627985   26899 system_pods.go:61] "kube-ingress-dns-minikube" [2bccaa8d-e874-466c-96e6-476f10eab5b5] Running
	I1209 22:35:25.627988   26899 system_pods.go:61] "kube-proxy-x6vmt" [f74e8d2a-5b4f-4e61-8783-167e45a70839] Running
	I1209 22:35:25.627992   26899 system_pods.go:61] "kube-scheduler-addons-495659" [7dfad718-626c-4238-8c31-891a41614578] Running
	I1209 22:35:25.627996   26899 system_pods.go:61] "metrics-server-84c5f94fbc-drvs4" [697234f5-8b91-4bd8-9d7a-681c7fd5c8b3] Running
	I1209 22:35:25.628002   26899 system_pods.go:61] "nvidia-device-plugin-daemonset-wbphv" [373a99a7-1c49-427a-931d-f6d3bcb7cc29] Running
	I1209 22:35:25.628010   26899 system_pods.go:61] "registry-5cc95cd69-m98x5" [ecb1f96a-9905-45be-b670-6791c5067c07] Running
	I1209 22:35:25.628015   26899 system_pods.go:61] "registry-proxy-xqgz7" [8103c584-faf4-4900-8fda-b5367b887c19] Running
	I1209 22:35:25.628020   26899 system_pods.go:61] "snapshot-controller-56fcc65765-b5gd5" [96a1edd3-1afc-4328-804d-8e1a4b5c0655] Running
	I1209 22:35:25.628028   26899 system_pods.go:61] "snapshot-controller-56fcc65765-pz724" [8ef3c979-b020-4950-835f-4960308d5a38] Running
	I1209 22:35:25.628033   26899 system_pods.go:61] "storage-provisioner" [1c9a6458-b9f3-47d5-af12-07b1a97dbcdd] Running
	I1209 22:35:25.628044   26899 system_pods.go:74] duration metric: took 11.858172377s to wait for pod list to return data ...
	I1209 22:35:25.628058   26899 default_sa.go:34] waiting for default service account to be created ...
	I1209 22:35:25.630469   26899 default_sa.go:45] found service account: "default"
	I1209 22:35:25.630489   26899 default_sa.go:55] duration metric: took 2.422445ms for default service account to be created ...
	I1209 22:35:25.630497   26899 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 22:35:25.639968   26899 system_pods.go:86] 18 kube-system pods found
	I1209 22:35:25.639995   26899 system_pods.go:89] "amd-gpu-device-plugin-k9c92" [0ae134a6-d82f-4b75-adef-ebd11156ef7e] Running
	I1209 22:35:25.640003   26899 system_pods.go:89] "coredns-7c65d6cfc9-d7jm7" [d8dad938-bb60-4879-907c-12003e131d8e] Running
	I1209 22:35:25.640008   26899 system_pods.go:89] "csi-hostpath-attacher-0" [9df0b766-98a8-45e9-a41a-b2d57a6f0b69] Running
	I1209 22:35:25.640015   26899 system_pods.go:89] "csi-hostpath-resizer-0" [1b9c7557-95a8-4767-8ae9-5765b9249de1] Running
	I1209 22:35:25.640021   26899 system_pods.go:89] "csi-hostpathplugin-g2mgw" [9d710134-71c5-4a26-86cd-f58e421e155c] Running
	I1209 22:35:25.640030   26899 system_pods.go:89] "etcd-addons-495659" [ad9e1594-8b6b-4f6b-a2b2-ba6c27608281] Running
	I1209 22:35:25.640036   26899 system_pods.go:89] "kube-apiserver-addons-495659" [8e8b50f7-6b12-436e-8373-822f3a7dce46] Running
	I1209 22:35:25.640044   26899 system_pods.go:89] "kube-controller-manager-addons-495659" [050e1ad7-dfe2-4dfd-aade-ba853c720d25] Running
	I1209 22:35:25.640050   26899 system_pods.go:89] "kube-ingress-dns-minikube" [2bccaa8d-e874-466c-96e6-476f10eab5b5] Running
	I1209 22:35:25.640060   26899 system_pods.go:89] "kube-proxy-x6vmt" [f74e8d2a-5b4f-4e61-8783-167e45a70839] Running
	I1209 22:35:25.640066   26899 system_pods.go:89] "kube-scheduler-addons-495659" [7dfad718-626c-4238-8c31-891a41614578] Running
	I1209 22:35:25.640072   26899 system_pods.go:89] "metrics-server-84c5f94fbc-drvs4" [697234f5-8b91-4bd8-9d7a-681c7fd5c8b3] Running
	I1209 22:35:25.640080   26899 system_pods.go:89] "nvidia-device-plugin-daemonset-wbphv" [373a99a7-1c49-427a-931d-f6d3bcb7cc29] Running
	I1209 22:35:25.640084   26899 system_pods.go:89] "registry-5cc95cd69-m98x5" [ecb1f96a-9905-45be-b670-6791c5067c07] Running
	I1209 22:35:25.640087   26899 system_pods.go:89] "registry-proxy-xqgz7" [8103c584-faf4-4900-8fda-b5367b887c19] Running
	I1209 22:35:25.640094   26899 system_pods.go:89] "snapshot-controller-56fcc65765-b5gd5" [96a1edd3-1afc-4328-804d-8e1a4b5c0655] Running
	I1209 22:35:25.640097   26899 system_pods.go:89] "snapshot-controller-56fcc65765-pz724" [8ef3c979-b020-4950-835f-4960308d5a38] Running
	I1209 22:35:25.640100   26899 system_pods.go:89] "storage-provisioner" [1c9a6458-b9f3-47d5-af12-07b1a97dbcdd] Running
	I1209 22:35:25.640106   26899 system_pods.go:126] duration metric: took 9.603358ms to wait for k8s-apps to be running ...
	I1209 22:35:25.640114   26899 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 22:35:25.640157   26899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:35:25.655968   26899 system_svc.go:56] duration metric: took 15.843283ms WaitForService to wait for kubelet
	I1209 22:35:25.655997   26899 kubeadm.go:582] duration metric: took 2m21.543718454s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:35:25.656027   26899 node_conditions.go:102] verifying NodePressure condition ...
	I1209 22:35:25.659154   26899 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:35:25.659181   26899 node_conditions.go:123] node cpu capacity is 2
	I1209 22:35:25.659197   26899 node_conditions.go:105] duration metric: took 3.165147ms to run NodePressure ...
	I1209 22:35:25.659210   26899 start.go:241] waiting for startup goroutines ...
	I1209 22:35:25.659225   26899 start.go:246] waiting for cluster config update ...
	I1209 22:35:25.659250   26899 start.go:255] writing updated cluster config ...
	I1209 22:35:25.659525   26899 ssh_runner.go:195] Run: rm -f paused
	I1209 22:35:25.708414   26899 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 22:35:25.711220   26899 out.go:177] * Done! kubectl is now configured to use "addons-495659" cluster and "default" namespace by default
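The "Problems detected in kubelet" warnings flagged above (secrets "gcp-auth-certs" is forbidden ... no relationship found between node 'addons-495659' and this object) appear to come from the Kubernetes Node authorizer, which only lets a kubelet read a Secret after a pod referencing it has been bound to that node; warnings of this kind are typically transient while the gcp-auth addon is still starting. An illustrative way to confirm the Secret exists and is eventually consumed by a scheduled pod (commands assumed for inspection only, not part of the test run):

	kubectl --context addons-495659 -n gcp-auth get secret gcp-auth-certs
	kubectl --context addons-495659 -n gcp-auth get pods -o wide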
	
	
	==> CRI-O <==
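The entries below are CRI-O's debug-level gRPC trace at the moment the logs were captured (22:41:51): a client, most likely the kubelet's periodic stats collection, polls ImageFsInfo and ListContainers every few tens of milliseconds, and the container list returned is identical on each poll. Roughly equivalent state can be inspected directly on the node with (illustrative commands, assuming crictl is installed on the host):

	sudo crictl ps -a          # all containers, the same data returned by ListContainers
	sudo crictl imagefsinfo    # image filesystem usage, the same data returned by ImageFsInfo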
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.027848244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784111027824993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51d569a7-f5b4-4476-afdb-3a57ffab9cee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.028319769Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a42aa689-f488-4f00-9d18-c0e0b8771f4b name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.028372775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a42aa689-f488-4f00-9d18-c0e0b8771f4b name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.028645544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11979c84010feefe1d368ebad2f3e8736bb51eccea443678936c11232a0c9890,PodSandboxId:c45a33d73a66c4cd77d18e7cea91840421e79ebbe42bee23c9b560d8bc4a4336,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733783906559638236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-r8srv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 296e2cce-48cc-4570-a9b7-bdd8f1dcc383,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c441db11bc82d36cb58c484d982f2b33d9746e3cfb59ab7e50f44d2a8f82beed,PodSandboxId:e637581f75904b9e88a63f5f1514109b9b225e11b150c77d97496a39d80a8e24,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733783769253960753,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e3e4108-ab04-4c22-9a48-b5b6431d743f,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceb3922f9f55ef7df0a0d5801bdd4a62c5a19c8dd9e2473dd0472aefdeebe31,PodSandboxId:87389b2b153f07c1dccbc9f998efb0b10c01a817c5e5e8d2d894091b5896fb2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733783728876642001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 305d5fb6-4c01-480c-9
f96-855b1c53733a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1253a32a552c9653ab385d2f4f005ae0b45d6e44a29b4644f4165d28518de200,PodSandboxId:92a3cc43270e6193b5f72e174a4acecb10ad1c904c6ace4acf749cc538497d46,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733783633362603291,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-srq65,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: bc7e0933-639d-4e14-9285-644308291889,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199c999b39deffa7789b0eee1bc6ab3af1a3190deedcaf06522abe738cafc790,PodSandboxId:86a0860e2152fe0d9a304ce6c5625a736bdbcc772a5f8ab9c2d6b629f0063e2f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733783623675264185,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-84c5f94fbc-drvs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 697234f5-8b91-4bd8-9d7a-681c7fd5c8b3,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac972c6ca8a2e0ccd932e682297bdc569dfc685d1ce8b88b78efc70298f3e4d,PodSandboxId:7529bf3e5c9afc3423fe4f4e149effa090097df337540b079d945a086719156f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733783602468855120,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-k9c92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae134a6-d82f-4b75-adef-ebd11156ef7e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c23be9fca0fc7f6d765c3552106eab10700f65628eb7ccd437a3bcfa0d5b9a,PodSandboxId:f93fdf924f8309c3effe372036e1e64c865bf9c5159ee3dd30d342fbf027fad6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733783591387029260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9a6458-b9f3-47d5-af12-07b1a97dbcdd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51,PodSandboxId:ab020f040f68b3cc195fc7eb1a9ffb524b923e471ceb2522b7054cfe0365ee67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173378358
8887036524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d7jm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8dad938-bb60-4879-907c-12003e131d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea,PodSandboxId:d28e9ffc762b0993c068c93892e6f5dfd007dea9dca19255d14d0e5d057c253d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733783585550402292,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x6vmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74e8d2a-5b4f-4e61-8783-167e45a70839,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb,PodSandboxId:ef109cc31f866b10c954e785db5445ee07e0101c1145ef2cc92cc2849ad037f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733783574087861658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94fbc14579012aee9e2dfe0c623f852b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c,PodSandboxId:2716276c108c2ec300f07d9d29435f94f1fc9402422ff2da602d63b66cb63f40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733783574095421255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805474f477faf5b0e135611efdccbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9,PodSandboxId:a1fdb7b4361a8cac11b6187d63408377a794fe44997e45cd6bc0d84e75a962e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733783574077026813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d09ee7fe7fafb23694a51345ea774ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72,PodSandboxId:460a571ff8a02351cfe1cb4af70e89ff62082982704050dc4975bc0f9a09af96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733783574086955131,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 931dcf38217fed4a6c2f86f5eefa5ddb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a42aa689-f488-4f00-9d18-c0e0b8771f4b name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.064432868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31948229-cdb3-42b5-8f71-990d6a0468b0 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.064502890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31948229-cdb3-42b5-8f71-990d6a0468b0 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.065497342Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d08ded85-2ee6-4c0b-9d1b-7891409e9490 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.066865611Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784111066840606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d08ded85-2ee6-4c0b-9d1b-7891409e9490 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.067546043Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e041e704-b2a9-45fd-8269-ef2b8b361532 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.067610828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e041e704-b2a9-45fd-8269-ef2b8b361532 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.067951921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11979c84010feefe1d368ebad2f3e8736bb51eccea443678936c11232a0c9890,PodSandboxId:c45a33d73a66c4cd77d18e7cea91840421e79ebbe42bee23c9b560d8bc4a4336,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733783906559638236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-r8srv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 296e2cce-48cc-4570-a9b7-bdd8f1dcc383,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c441db11bc82d36cb58c484d982f2b33d9746e3cfb59ab7e50f44d2a8f82beed,PodSandboxId:e637581f75904b9e88a63f5f1514109b9b225e11b150c77d97496a39d80a8e24,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733783769253960753,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e3e4108-ab04-4c22-9a48-b5b6431d743f,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceb3922f9f55ef7df0a0d5801bdd4a62c5a19c8dd9e2473dd0472aefdeebe31,PodSandboxId:87389b2b153f07c1dccbc9f998efb0b10c01a817c5e5e8d2d894091b5896fb2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733783728876642001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 305d5fb6-4c01-480c-9
f96-855b1c53733a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1253a32a552c9653ab385d2f4f005ae0b45d6e44a29b4644f4165d28518de200,PodSandboxId:92a3cc43270e6193b5f72e174a4acecb10ad1c904c6ace4acf749cc538497d46,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733783633362603291,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-srq65,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: bc7e0933-639d-4e14-9285-644308291889,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199c999b39deffa7789b0eee1bc6ab3af1a3190deedcaf06522abe738cafc790,PodSandboxId:86a0860e2152fe0d9a304ce6c5625a736bdbcc772a5f8ab9c2d6b629f0063e2f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733783623675264185,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-84c5f94fbc-drvs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 697234f5-8b91-4bd8-9d7a-681c7fd5c8b3,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac972c6ca8a2e0ccd932e682297bdc569dfc685d1ce8b88b78efc70298f3e4d,PodSandboxId:7529bf3e5c9afc3423fe4f4e149effa090097df337540b079d945a086719156f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733783602468855120,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-k9c92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae134a6-d82f-4b75-adef-ebd11156ef7e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c23be9fca0fc7f6d765c3552106eab10700f65628eb7ccd437a3bcfa0d5b9a,PodSandboxId:f93fdf924f8309c3effe372036e1e64c865bf9c5159ee3dd30d342fbf027fad6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733783591387029260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9a6458-b9f3-47d5-af12-07b1a97dbcdd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51,PodSandboxId:ab020f040f68b3cc195fc7eb1a9ffb524b923e471ceb2522b7054cfe0365ee67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173378358
8887036524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d7jm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8dad938-bb60-4879-907c-12003e131d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea,PodSandboxId:d28e9ffc762b0993c068c93892e6f5dfd007dea9dca19255d14d0e5d057c253d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733783585550402292,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x6vmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74e8d2a-5b4f-4e61-8783-167e45a70839,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb,PodSandboxId:ef109cc31f866b10c954e785db5445ee07e0101c1145ef2cc92cc2849ad037f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733783574087861658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94fbc14579012aee9e2dfe0c623f852b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c,PodSandboxId:2716276c108c2ec300f07d9d29435f94f1fc9402422ff2da602d63b66cb63f40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733783574095421255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805474f477faf5b0e135611efdccbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9,PodSandboxId:a1fdb7b4361a8cac11b6187d63408377a794fe44997e45cd6bc0d84e75a962e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733783574077026813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d09ee7fe7fafb23694a51345ea774ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72,PodSandboxId:460a571ff8a02351cfe1cb4af70e89ff62082982704050dc4975bc0f9a09af96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733783574086955131,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 931dcf38217fed4a6c2f86f5eefa5ddb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e041e704-b2a9-45fd-8269-ef2b8b361532 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.102796169Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca2808e0-034c-404a-8210-1e405832b791 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.102875588Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca2808e0-034c-404a-8210-1e405832b791 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.104082941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36abc079-4ce6-4c49-a720-b50c9c569583 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.105952608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784111105887026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36abc079-4ce6-4c49-a720-b50c9c569583 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.106985376Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d009c0bf-8b77-4282-a231-0490ee5a8a19 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.107237459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d009c0bf-8b77-4282-a231-0490ee5a8a19 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.108361704Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11979c84010feefe1d368ebad2f3e8736bb51eccea443678936c11232a0c9890,PodSandboxId:c45a33d73a66c4cd77d18e7cea91840421e79ebbe42bee23c9b560d8bc4a4336,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733783906559638236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-r8srv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 296e2cce-48cc-4570-a9b7-bdd8f1dcc383,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c441db11bc82d36cb58c484d982f2b33d9746e3cfb59ab7e50f44d2a8f82beed,PodSandboxId:e637581f75904b9e88a63f5f1514109b9b225e11b150c77d97496a39d80a8e24,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733783769253960753,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e3e4108-ab04-4c22-9a48-b5b6431d743f,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceb3922f9f55ef7df0a0d5801bdd4a62c5a19c8dd9e2473dd0472aefdeebe31,PodSandboxId:87389b2b153f07c1dccbc9f998efb0b10c01a817c5e5e8d2d894091b5896fb2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733783728876642001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 305d5fb6-4c01-480c-9
f96-855b1c53733a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1253a32a552c9653ab385d2f4f005ae0b45d6e44a29b4644f4165d28518de200,PodSandboxId:92a3cc43270e6193b5f72e174a4acecb10ad1c904c6ace4acf749cc538497d46,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733783633362603291,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-srq65,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: bc7e0933-639d-4e14-9285-644308291889,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199c999b39deffa7789b0eee1bc6ab3af1a3190deedcaf06522abe738cafc790,PodSandboxId:86a0860e2152fe0d9a304ce6c5625a736bdbcc772a5f8ab9c2d6b629f0063e2f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733783623675264185,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-84c5f94fbc-drvs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 697234f5-8b91-4bd8-9d7a-681c7fd5c8b3,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac972c6ca8a2e0ccd932e682297bdc569dfc685d1ce8b88b78efc70298f3e4d,PodSandboxId:7529bf3e5c9afc3423fe4f4e149effa090097df337540b079d945a086719156f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733783602468855120,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-k9c92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae134a6-d82f-4b75-adef-ebd11156ef7e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c23be9fca0fc7f6d765c3552106eab10700f65628eb7ccd437a3bcfa0d5b9a,PodSandboxId:f93fdf924f8309c3effe372036e1e64c865bf9c5159ee3dd30d342fbf027fad6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733783591387029260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9a6458-b9f3-47d5-af12-07b1a97dbcdd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51,PodSandboxId:ab020f040f68b3cc195fc7eb1a9ffb524b923e471ceb2522b7054cfe0365ee67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173378358
8887036524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d7jm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8dad938-bb60-4879-907c-12003e131d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea,PodSandboxId:d28e9ffc762b0993c068c93892e6f5dfd007dea9dca19255d14d0e5d057c253d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733783585550402292,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x6vmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74e8d2a-5b4f-4e61-8783-167e45a70839,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb,PodSandboxId:ef109cc31f866b10c954e785db5445ee07e0101c1145ef2cc92cc2849ad037f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733783574087861658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94fbc14579012aee9e2dfe0c623f852b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c,PodSandboxId:2716276c108c2ec300f07d9d29435f94f1fc9402422ff2da602d63b66cb63f40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733783574095421255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805474f477faf5b0e135611efdccbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9,PodSandboxId:a1fdb7b4361a8cac11b6187d63408377a794fe44997e45cd6bc0d84e75a962e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733783574077026813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d09ee7fe7fafb23694a51345ea774ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72,PodSandboxId:460a571ff8a02351cfe1cb4af70e89ff62082982704050dc4975bc0f9a09af96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733783574086955131,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 931dcf38217fed4a6c2f86f5eefa5ddb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d009c0bf-8b77-4282-a231-0490ee5a8a19 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.145885302Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5fc3559f-21b9-4fe4-a456-aa6d92cee961 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.146002809Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fc3559f-21b9-4fe4-a456-aa6d92cee961 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.147143954Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9adb1431-684a-4196-a516-2c5dc8da10dd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.148345508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784111148317593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9adb1431-684a-4196-a516-2c5dc8da10dd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.148868760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33178648-ec78-48ac-bd75-e73c05497627 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.148979948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33178648-ec78-48ac-bd75-e73c05497627 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:41:51 addons-495659 crio[661]: time="2024-12-09 22:41:51.149271886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11979c84010feefe1d368ebad2f3e8736bb51eccea443678936c11232a0c9890,PodSandboxId:c45a33d73a66c4cd77d18e7cea91840421e79ebbe42bee23c9b560d8bc4a4336,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733783906559638236,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-r8srv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 296e2cce-48cc-4570-a9b7-bdd8f1dcc383,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c441db11bc82d36cb58c484d982f2b33d9746e3cfb59ab7e50f44d2a8f82beed,PodSandboxId:e637581f75904b9e88a63f5f1514109b9b225e11b150c77d97496a39d80a8e24,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733783769253960753,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e3e4108-ab04-4c22-9a48-b5b6431d743f,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fceb3922f9f55ef7df0a0d5801bdd4a62c5a19c8dd9e2473dd0472aefdeebe31,PodSandboxId:87389b2b153f07c1dccbc9f998efb0b10c01a817c5e5e8d2d894091b5896fb2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733783728876642001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 305d5fb6-4c01-480c-9
f96-855b1c53733a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1253a32a552c9653ab385d2f4f005ae0b45d6e44a29b4644f4165d28518de200,PodSandboxId:92a3cc43270e6193b5f72e174a4acecb10ad1c904c6ace4acf749cc538497d46,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733783633362603291,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-srq65,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: bc7e0933-639d-4e14-9285-644308291889,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199c999b39deffa7789b0eee1bc6ab3af1a3190deedcaf06522abe738cafc790,PodSandboxId:86a0860e2152fe0d9a304ce6c5625a736bdbcc772a5f8ab9c2d6b629f0063e2f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733783623675264185,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-84c5f94fbc-drvs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 697234f5-8b91-4bd8-9d7a-681c7fd5c8b3,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac972c6ca8a2e0ccd932e682297bdc569dfc685d1ce8b88b78efc70298f3e4d,PodSandboxId:7529bf3e5c9afc3423fe4f4e149effa090097df337540b079d945a086719156f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733783602468855120,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-k9c92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae134a6-d82f-4b75-adef-ebd11156ef7e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c23be9fca0fc7f6d765c3552106eab10700f65628eb7ccd437a3bcfa0d5b9a,PodSandboxId:f93fdf924f8309c3effe372036e1e64c865bf9c5159ee3dd30d342fbf027fad6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733783591387029260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9a6458-b9f3-47d5-af12-07b1a97dbcdd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51,PodSandboxId:ab020f040f68b3cc195fc7eb1a9ffb524b923e471ceb2522b7054cfe0365ee67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173378358
8887036524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-d7jm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8dad938-bb60-4879-907c-12003e131d8e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea,PodSandboxId:d28e9ffc762b0993c068c93892e6f5dfd007dea9dca19255d14d0e5d057c253d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733783585550402292,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x6vmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74e8d2a-5b4f-4e61-8783-167e45a70839,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb,PodSandboxId:ef109cc31f866b10c954e785db5445ee07e0101c1145ef2cc92cc2849ad037f0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733783574087861658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94fbc14579012aee9e2dfe0c623f852b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c,PodSandboxId:2716276c108c2ec300f07d9d29435f94f1fc9402422ff2da602d63b66cb63f40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733783574095421255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805474f477faf5b0e135611efdccbfa2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9,PodSandboxId:a1fdb7b4361a8cac11b6187d63408377a794fe44997e45cd6bc0d84e75a962e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733783574077026813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d09ee7fe7fafb23694a51345ea774ec,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72,PodSandboxId:460a571ff8a02351cfe1cb4af70e89ff62082982704050dc4975bc0f9a09af96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733783574086955131,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-495659,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 931dcf38217fed4a6c2f86f5eefa5ddb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33178648-ec78-48ac-bd75-e73c05497627 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	11979c84010fe       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   c45a33d73a66c       hello-world-app-55bf9c44b4-r8srv
	c441db11bc82d       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         5 minutes ago       Running             nginx                     0                   e637581f75904       nginx
	fceb3922f9f55       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   87389b2b153f0       busybox
	1253a32a552c9       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   92a3cc43270e6       local-path-provisioner-86d989889c-srq65
	199c999b39def       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   8 minutes ago       Running             metrics-server            0                   86a0860e2152f       metrics-server-84c5f94fbc-drvs4
	9ac972c6ca8a2       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                8 minutes ago       Running             amd-gpu-device-plugin     0                   7529bf3e5c9af       amd-gpu-device-plugin-k9c92
	e0c23be9fca0f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   f93fdf924f830       storage-provisioner
	0db318df65ff7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        8 minutes ago       Running             coredns                   0                   ab020f040f68b       coredns-7c65d6cfc9-d7jm7
	03167612b8d46       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        8 minutes ago       Running             kube-proxy                0                   d28e9ffc762b0       kube-proxy-x6vmt
	4d807bef69ecb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   2716276c108c2       etcd-addons-495659
	69519752b978b       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        8 minutes ago       Running             kube-scheduler            0                   ef109cc31f866       kube-scheduler-addons-495659
	0a7dd6f001e51       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        8 minutes ago       Running             kube-apiserver            0                   460a571ff8a02       kube-apiserver-addons-495659
	3ce76bec56eb8       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        8 minutes ago       Running             kube-controller-manager   0                   a1fdb7b4361a8       kube-controller-manager-addons-495659
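
The container listing above is the CRI runtime's view of the node at log-collection time. Assuming the addons-495659 profile is still running, a roughly equivalent listing can be pulled with crictl over minikube's ssh (a reproduction sketch, not part of the captured log):

    minikube -p addons-495659 ssh -- sudo crictl ps -a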
	
	
	==> coredns [0db318df65ff7495e71d8db136b9900162fd87e0838bec0ffc6ac018751dbd51] <==
	[INFO] 10.244.0.22:47297 - 27627 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000101444s
	[INFO] 10.244.0.22:44910 - 18616 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000118719s
	[INFO] 10.244.0.22:47297 - 42515 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061381s
	[INFO] 10.244.0.22:44910 - 20639 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00013512s
	[INFO] 10.244.0.22:47297 - 34437 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045403s
	[INFO] 10.244.0.22:44910 - 28556 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000165491s
	[INFO] 10.244.0.22:44910 - 27178 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006474s
	[INFO] 10.244.0.22:47297 - 64569 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041038s
	[INFO] 10.244.0.22:44910 - 34187 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070518s
	[INFO] 10.244.0.22:47297 - 16807 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043004s
	[INFO] 10.244.0.22:47297 - 20653 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00007964s
	[INFO] 10.244.0.22:45760 - 51499 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000089979s
	[INFO] 10.244.0.22:60049 - 37082 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00003534s
	[INFO] 10.244.0.22:60049 - 31882 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000078126s
	[INFO] 10.244.0.22:45760 - 9746 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006631s
	[INFO] 10.244.0.22:45760 - 47620 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067612s
	[INFO] 10.244.0.22:60049 - 23649 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034662s
	[INFO] 10.244.0.22:60049 - 51888 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00010613s
	[INFO] 10.244.0.22:45760 - 3491 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077345s
	[INFO] 10.244.0.22:60049 - 62201 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000238534s
	[INFO] 10.244.0.22:60049 - 65060 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066462s
	[INFO] 10.244.0.22:45760 - 59742 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000764s
	[INFO] 10.244.0.22:45760 - 21479 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000049244s
	[INFO] 10.244.0.22:60049 - 56798 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00009191s
	[INFO] 10.244.0.22:45760 - 42017 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000080139s
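
The repeated NXDOMAIN entries are the client's resolver walking its namespace-local search path (ndots:5): the name is tried with the ingress-nginx.svc.cluster.local, svc.cluster.local and cluster.local suffixes before the bare service name returns NOERROR, which also suggests the client at 10.244.0.22 sits in the ingress-nginx namespace. As a sketch, querying the fully qualified name with a trailing dot from the busybox pod skips that expansion (assuming the busybox image ships nslookup):

    kubectl --context addons-495659 exec busybox -- nslookup hello-world-app.default.svc.cluster.local.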
	
	
	==> describe nodes <==
	Name:               addons-495659
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-495659
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=addons-495659
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T22_32_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-495659
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:32:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-495659
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:41:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:38:35 +0000   Mon, 09 Dec 2024 22:32:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:38:35 +0000   Mon, 09 Dec 2024 22:32:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:38:35 +0000   Mon, 09 Dec 2024 22:32:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:38:35 +0000   Mon, 09 Dec 2024 22:33:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    addons-495659
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f7fc1d23c4447a6b647c74af79ff52c
	  System UUID:                6f7fc1d2-3c44-47a6-b647-c74af79ff52c
	  Boot ID:                    e0437aa1-375f-4d05-8d44-cfd4e70449ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  default                     hello-world-app-55bf9c44b4-r8srv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 amd-gpu-device-plugin-k9c92                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	  kube-system                 coredns-7c65d6cfc9-d7jm7                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m47s
	  kube-system                 etcd-addons-495659                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m52s
	  kube-system                 kube-apiserver-addons-495659               250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m52s
	  kube-system                 kube-controller-manager-addons-495659      200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m52s
	  kube-system                 kube-proxy-x6vmt                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	  kube-system                 kube-scheduler-addons-495659               100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m52s
	  kube-system                 metrics-server-84c5f94fbc-drvs4            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m43s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m42s
	  local-path-storage          local-path-provisioner-86d989889c-srq65    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m44s  kube-proxy       
	  Normal  Starting                 8m52s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m52s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m52s  kubelet          Node addons-495659 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m52s  kubelet          Node addons-495659 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m52s  kubelet          Node addons-495659 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m51s  kubelet          Node addons-495659 status is now: NodeReady
	  Normal  RegisteredNode           8m48s  node-controller  Node addons-495659 event: Registered Node addons-495659 in Controller
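
The node description above is a point-in-time snapshot taken during log collection; while the cluster is still up, the same output can be regenerated against the profile's context:

    kubectl --context addons-495659 describe node addons-495659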
	
	
	==> dmesg <==
	[  +0.088011] kauditd_printk_skb: 69 callbacks suppressed
	[Dec 9 22:33] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[  +0.167220] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.010973] kauditd_printk_skb: 116 callbacks suppressed
	[  +5.272101] kauditd_printk_skb: 131 callbacks suppressed
	[  +5.271276] kauditd_printk_skb: 71 callbacks suppressed
	[ +14.489459] kauditd_printk_skb: 15 callbacks suppressed
	[ +10.084986] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.309084] kauditd_printk_skb: 2 callbacks suppressed
	[Dec 9 22:34] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.064148] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.161304] kauditd_printk_skb: 42 callbacks suppressed
	[  +9.179112] kauditd_printk_skb: 22 callbacks suppressed
	[ +13.874383] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 9 22:35] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.115483] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.055871] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.326466] kauditd_printk_skb: 34 callbacks suppressed
	[Dec 9 22:36] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.089690] kauditd_printk_skb: 64 callbacks suppressed
	[ +12.876936] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.667611] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.459316] kauditd_printk_skb: 10 callbacks suppressed
	[Dec 9 22:38] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.595219] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [4d807bef69ecb1b1075de78cd6348ea072c92372b7bd5adaea05003dc4c7869c] <==
	{"level":"info","ts":"2024-12-09T22:34:23.570162Z","caller":"traceutil/trace.go:171","msg":"trace[308595677] linearizableReadLoop","detail":"{readStateIndex:1132; appliedIndex:1131; }","duration":"303.724239ms","start":"2024-12-09T22:34:23.266422Z","end":"2024-12-09T22:34:23.570147Z","steps":["trace[308595677] 'read index received'  (duration: 303.634457ms)","trace[308595677] 'applied index is now lower than readState.Index'  (duration: 89.163µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T22:34:23.570305Z","caller":"traceutil/trace.go:171","msg":"trace[1535974128] transaction","detail":"{read_only:false; response_revision:1102; number_of_response:1; }","duration":"310.987393ms","start":"2024-12-09T22:34:23.259310Z","end":"2024-12-09T22:34:23.570297Z","steps":["trace[1535974128] 'process raft request'  (duration: 310.738516ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T22:34:23.570404Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.628697ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T22:34:23.570458Z","caller":"traceutil/trace.go:171","msg":"trace[1597284253] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"293.690451ms","start":"2024-12-09T22:34:23.276759Z","end":"2024-12-09T22:34:23.570449Z","steps":["trace[1597284253] 'agreement among raft nodes before linearized reading'  (duration: 293.594454ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T22:34:23.570424Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T22:34:23.259295Z","time spent":"311.051041ms","remote":"127.0.0.1:34386","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":482,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1090 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:419 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-12-09T22:34:23.570657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.244691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-drvs4\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2024-12-09T22:34:23.570689Z","caller":"traceutil/trace.go:171","msg":"trace[179489460] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-drvs4; range_end:; response_count:1; response_revision:1102; }","duration":"304.278862ms","start":"2024-12-09T22:34:23.266405Z","end":"2024-12-09T22:34:23.570684Z","steps":["trace[179489460] 'agreement among raft nodes before linearized reading'  (duration: 304.167495ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T22:34:23.570708Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T22:34:23.266350Z","time spent":"304.35291ms","remote":"127.0.0.1:34300","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4589,"request content":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-drvs4\" "}
	{"level":"warn","ts":"2024-12-09T22:34:23.570801Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.524643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-12-09T22:34:23.570831Z","caller":"traceutil/trace.go:171","msg":"trace[1380735404] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1102; }","duration":"186.556079ms","start":"2024-12-09T22:34:23.384269Z","end":"2024-12-09T22:34:23.570825Z","steps":["trace[1380735404] 'agreement among raft nodes before linearized reading'  (duration: 186.513114ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T22:34:55.917575Z","caller":"traceutil/trace.go:171","msg":"trace[1843404297] transaction","detail":"{read_only:false; response_revision:1203; number_of_response:1; }","duration":"124.587896ms","start":"2024-12-09T22:34:55.792973Z","end":"2024-12-09T22:34:55.917561Z","steps":["trace[1843404297] 'process raft request'  (duration: 124.280417ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T22:36:09.150831Z","caller":"traceutil/trace.go:171","msg":"trace[1874001647] linearizableReadLoop","detail":"{readStateIndex:1634; appliedIndex:1633; }","duration":"102.512757ms","start":"2024-12-09T22:36:09.048305Z","end":"2024-12-09T22:36:09.150818Z","steps":["trace[1874001647] 'read index received'  (duration: 102.385844ms)","trace[1874001647] 'applied index is now lower than readState.Index'  (duration: 126.23µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T22:36:09.151186Z","caller":"traceutil/trace.go:171","msg":"trace[1647848914] transaction","detail":"{read_only:false; response_revision:1572; number_of_response:1; }","duration":"356.141011ms","start":"2024-12-09T22:36:08.795035Z","end":"2024-12-09T22:36:09.151176Z","steps":["trace[1647848914] 'process raft request'  (duration: 355.699702ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T22:36:09.152090Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T22:36:08.795022Z","time spent":"357.01071ms","remote":"127.0.0.1:34386","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1532 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-12-09T22:36:09.151284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.970403ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T22:36:09.152563Z","caller":"traceutil/trace.go:171","msg":"trace[2107284344] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1572; }","duration":"104.255936ms","start":"2024-12-09T22:36:09.048294Z","end":"2024-12-09T22:36:09.152550Z","steps":["trace[2107284344] 'agreement among raft nodes before linearized reading'  (duration: 102.958741ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T22:36:24.244318Z","caller":"traceutil/trace.go:171","msg":"trace[1180422457] linearizableReadLoop","detail":"{readStateIndex:1703; appliedIndex:1702; }","duration":"197.090968ms","start":"2024-12-09T22:36:24.047214Z","end":"2024-12-09T22:36:24.244304Z","steps":["trace[1180422457] 'read index received'  (duration: 196.933384ms)","trace[1180422457] 'applied index is now lower than readState.Index'  (duration: 156.98µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T22:36:24.244397Z","caller":"traceutil/trace.go:171","msg":"trace[239729648] transaction","detail":"{read_only:false; response_revision:1638; number_of_response:1; }","duration":"210.441458ms","start":"2024-12-09T22:36:24.033942Z","end":"2024-12-09T22:36:24.244383Z","steps":["trace[239729648] 'process raft request'  (duration: 210.226298ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T22:36:24.244431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.204147ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T22:36:24.244451Z","caller":"traceutil/trace.go:171","msg":"trace[317081139] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1638; }","duration":"197.23819ms","start":"2024-12-09T22:36:24.047207Z","end":"2024-12-09T22:36:24.244446Z","steps":["trace[317081139] 'agreement among raft nodes before linearized reading'  (duration: 197.170923ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T22:36:50.834752Z","caller":"traceutil/trace.go:171","msg":"trace[442177980] linearizableReadLoop","detail":"{readStateIndex:1915; appliedIndex:1914; }","duration":"201.753591ms","start":"2024-12-09T22:36:50.632985Z","end":"2024-12-09T22:36:50.834738Z","steps":["trace[442177980] 'read index received'  (duration: 201.627655ms)","trace[442177980] 'applied index is now lower than readState.Index'  (duration: 125.515µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T22:36:50.834860Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.874237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-resizer-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T22:36:50.834879Z","caller":"traceutil/trace.go:171","msg":"trace[506953096] range","detail":"{range_begin:/registry/clusterroles/external-resizer-runner; range_end:; response_count:0; response_revision:1836; }","duration":"201.911043ms","start":"2024-12-09T22:36:50.632963Z","end":"2024-12-09T22:36:50.834874Z","steps":["trace[506953096] 'agreement among raft nodes before linearized reading'  (duration: 201.832636ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T22:36:50.835029Z","caller":"traceutil/trace.go:171","msg":"trace[253249298] transaction","detail":"{read_only:false; response_revision:1836; number_of_response:1; }","duration":"366.879696ms","start":"2024-12-09T22:36:50.468137Z","end":"2024-12-09T22:36:50.835017Z","steps":["trace[253249298] 'process raft request'  (duration: 366.516212ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T22:36:50.835111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T22:36:50.468122Z","time spent":"366.939864ms","remote":"127.0.0.1:34280","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1828 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
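
The "apply request took too long" warnings mark requests that exceeded etcd's 100ms expected-duration; on a single-node minikube VM this is usually disk/fsync latency rather than a functional failure. A rough way to gauge how often it occurred over the whole run (assuming the cluster is still reachable):

    kubectl --context addons-495659 -n kube-system logs etcd-addons-495659 | grep -c "apply request took too long"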
	
	
	==> kernel <==
	 22:41:51 up 9 min,  0 users,  load average: 0.06, 0.42, 0.36
	Linux addons-495659 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0a7dd6f001e51ad124c84fe2a42366e3099006ba1066210ce06a5f3b8efbde72] <==
	E1209 22:34:49.332523       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.45.9:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.45.9:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.45.9:443: connect: connection refused" logger="UnhandledError"
	E1209 22:34:49.334515       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.45.9:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.45.9:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.45.9:443: connect: connection refused" logger="UnhandledError"
	E1209 22:34:49.339449       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.45.9:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.45.9:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.45.9:443: connect: connection refused" logger="UnhandledError"
	I1209 22:34:49.408933       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1209 22:35:35.444988       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:45530: use of closed network connection
	E1209 22:35:35.623097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:45564: use of closed network connection
	I1209 22:35:44.708487       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.31.146"}
	I1209 22:36:04.882928       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 22:36:05.066047       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.176.249"}
	I1209 22:36:09.719969       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1209 22:36:10.750017       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1209 22:36:31.953190       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1209 22:36:46.514310       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 22:36:46.514369       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 22:36:46.556592       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 22:36:46.556639       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 22:36:46.575888       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 22:36:46.576476       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 22:36:46.604405       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 22:36:46.604496       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1209 22:36:47.557777       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1209 22:36:47.605190       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1209 22:36:47.702157       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1209 22:38:24.207303       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.254.205"}
	E1209 22:38:28.911094       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
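
The v1beta1.metrics.k8s.io errors show the aggregated metrics API being unreachable while metrics-server was coming up. Whether the APIService eventually reported Available can be checked from the same context:

    kubectl --context addons-495659 get apiservice v1beta1.metrics.k8s.io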
	
	
	==> kube-controller-manager [3ce76bec56eb80fa49b8c1092f1e55bdc74f7b4935dee8926d573017905920b9] <==
	E1209 22:39:42.566713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:39:54.439336       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:39:54.439394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:40:05.850820       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:40:05.850949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:40:27.387061       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:40:27.387185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:40:30.412113       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:40:30.412250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:40:40.689634       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:40:40.689752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:40:50.466836       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:40:50.466998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:41:01.257111       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:41:01.257157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:41:10.300822       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:41:10.301002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:41:18.118630       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:41:18.118746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:41:31.303079       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:41:31.303172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:41:42.240795       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:41:42.240947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 22:41:42.970816       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 22:41:42.970867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
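
The repeated *v1.PartialObjectMetadata list failures appear to come from metadata informers still watching group/versions whose CRDs were deleted mid-test (the snapshot.storage.k8s.io and gadget.kinvolk.io removals visible in the kube-apiserver log above). Which of those CRDs remain can be confirmed with:

    kubectl --context addons-495659 get crd | grep -E "snapshot.storage.k8s.io|gadget.kinvolk.io"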
	
	
	==> kube-proxy [03167612b8d4639d3c81cda95c8fbead487c49cf15082d5142d96d88f1dc0eea] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 22:33:06.610078       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 22:33:06.649856       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	E1209 22:33:06.650053       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 22:33:06.756286       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 22:33:06.756316       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 22:33:06.756342       1 server_linux.go:169] "Using iptables Proxier"
	I1209 22:33:06.769170       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 22:33:06.769425       1 server.go:483] "Version info" version="v1.31.2"
	I1209 22:33:06.769473       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 22:33:06.775071       1 config.go:199] "Starting service config controller"
	I1209 22:33:06.775086       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 22:33:06.775116       1 config.go:105] "Starting endpoint slice config controller"
	I1209 22:33:06.775120       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 22:33:06.775556       1 config.go:328] "Starting node config controller"
	I1209 22:33:06.775564       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 22:33:06.888291       1 shared_informer.go:320] Caches are synced for node config
	I1209 22:33:06.888335       1 shared_informer.go:320] Caches are synced for service config
	I1209 22:33:06.888393       1 shared_informer.go:320] Caches are synced for endpoint slice config
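
The nftables errors at startup only affect kube-proxy's cleanup of stale rules; it then proceeds with the iptables proxier, as the "Using iptables Proxier" line shows. The configured mode can be read back from the kube-proxy ConfigMap (assuming the default kubeadm naming):

    kubectl --context addons-495659 -n kube-system get configmap kube-proxy -o yaml | grep "mode:"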
	
	
	==> kube-scheduler [69519752b978b9976190d6dbab0e222cdb92b7949b3ee72cb729c02468bd5afb] <==
	W1209 22:32:57.446670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 22:32:57.446776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.480981       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 22:32:57.481086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.533169       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 22:32:57.533324       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 22:32:57.603548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1209 22:32:57.603814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.628473       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 22:32:57.628580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.658486       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 22:32:57.658642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.750668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 22:32:57.750800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.801744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 22:32:57.802054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.815374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1209 22:32:57.815465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.826887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 22:32:57.826986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.908281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 22:32:57.908334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 22:32:57.934772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 22:32:57.934968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 22:32:59.710371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 22:40:29 addons-495659 kubelet[1207]: E1209 22:40:29.526023    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784029525515842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:40:34 addons-495659 kubelet[1207]: I1209 22:40:34.221251    1207 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 22:40:39 addons-495659 kubelet[1207]: E1209 22:40:39.528171    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784039527681887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:40:39 addons-495659 kubelet[1207]: E1209 22:40:39.528205    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784039527681887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:40:49 addons-495659 kubelet[1207]: E1209 22:40:49.531517    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784049531025379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:40:49 addons-495659 kubelet[1207]: E1209 22:40:49.531562    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784049531025379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:40:59 addons-495659 kubelet[1207]: E1209 22:40:59.240054    1207 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 22:40:59 addons-495659 kubelet[1207]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 22:40:59 addons-495659 kubelet[1207]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 22:40:59 addons-495659 kubelet[1207]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 22:40:59 addons-495659 kubelet[1207]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 22:40:59 addons-495659 kubelet[1207]: E1209 22:40:59.535686    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784059535138976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:40:59 addons-495659 kubelet[1207]: E1209 22:40:59.535719    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784059535138976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:41:09 addons-495659 kubelet[1207]: E1209 22:41:09.538197    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784069537802802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:41:09 addons-495659 kubelet[1207]: E1209 22:41:09.538479    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784069537802802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:41:15 addons-495659 kubelet[1207]: I1209 22:41:15.221766    1207 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-k9c92" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 22:41:19 addons-495659 kubelet[1207]: E1209 22:41:19.545382    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784079541420980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:41:19 addons-495659 kubelet[1207]: E1209 22:41:19.545762    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784079541420980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:41:29 addons-495659 kubelet[1207]: E1209 22:41:29.548123    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784089547746001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:41:29 addons-495659 kubelet[1207]: E1209 22:41:29.548151    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784089547746001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:41:37 addons-495659 kubelet[1207]: I1209 22:41:37.221427    1207 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 22:41:39 addons-495659 kubelet[1207]: E1209 22:41:39.551070    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784099550626573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:41:39 addons-495659 kubelet[1207]: E1209 22:41:39.551496    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784099550626573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:41:49 addons-495659 kubelet[1207]: E1209 22:41:49.554526    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784109554142666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:41:49 addons-495659 kubelet[1207]: E1209 22:41:49.554570    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784109554142666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [e0c23be9fca0fc7f6d765c3552106eab10700f65628eb7ccd437a3bcfa0d5b9a] <==
	I1209 22:33:12.333738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 22:33:12.371657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 22:33:12.371702       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 22:33:12.442821       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 22:33:12.458069       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2b4e2601-9884-4453-b8b1-6d90190db87b", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-495659_d64f94cb-e5cc-490e-bda7-7b4bf7f40e77 became leader
	I1209 22:33:12.472458       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-495659_d64f94cb-e5cc-490e-bda7-7b4bf7f40e77!
	I1209 22:33:12.572714       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-495659_d64f94cb-e5cc-490e-bda7-7b4bf7f40e77!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-495659 -n addons-495659
helpers_test.go:261: (dbg) Run:  kubectl --context addons-495659 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (351.67s)

x
+
TestAddons/StoppedEnableDisable (154.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-495659
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-495659: exit status 82 (2m0.458761752s)

-- stdout --
	* Stopping node "addons-495659"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-495659" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-495659
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-495659: exit status 11 (21.52885432s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.123:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-495659" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-495659
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-495659: exit status 11 (6.143197702s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.123:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-495659" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-495659
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-495659: exit status 11 (6.142847774s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.123:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-495659" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.27s)
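For reference, the failing sequence can be replayed by hand with the same CLI calls recorded above; this is only a sketch assembled from commands that appear in this run's logs (profile addons-495659):

	# Stop step that timed out with GUEST_STOP_TIMEOUT (exit status 82) above
	out/minikube-linux-amd64 stop -p addons-495659
	# Follow-up addon operations then fail with "no route to host" (exit status 11)
	out/minikube-linux-amd64 addons enable dashboard -p addons-495659
	out/minikube-linux-amd64 addons disable dashboard -p addons-495659
	out/minikube-linux-amd64 addons disable gvisor -p addons-495659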

x
+
TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort


=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-967202 image ls --format short --alsologtostderr: (2.251963683s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-967202 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-967202 image ls --format short --alsologtostderr:
I1209 22:48:40.830176   36385 out.go:345] Setting OutFile to fd 1 ...
I1209 22:48:40.830286   36385 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:48:40.830297   36385 out.go:358] Setting ErrFile to fd 2...
I1209 22:48:40.830303   36385 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:48:40.830506   36385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
I1209 22:48:40.831098   36385 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 22:48:40.831190   36385 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 22:48:40.831611   36385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 22:48:40.831655   36385 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 22:48:40.846467   36385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40165
I1209 22:48:40.846983   36385 main.go:141] libmachine: () Calling .GetVersion
I1209 22:48:40.847603   36385 main.go:141] libmachine: Using API Version  1
I1209 22:48:40.847630   36385 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 22:48:40.848067   36385 main.go:141] libmachine: () Calling .GetMachineName
I1209 22:48:40.848293   36385 main.go:141] libmachine: (functional-967202) Calling .GetState
I1209 22:48:40.850207   36385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 22:48:40.850248   36385 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 22:48:40.864963   36385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
I1209 22:48:40.865412   36385 main.go:141] libmachine: () Calling .GetVersion
I1209 22:48:40.865915   36385 main.go:141] libmachine: Using API Version  1
I1209 22:48:40.865936   36385 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 22:48:40.866277   36385 main.go:141] libmachine: () Calling .GetMachineName
I1209 22:48:40.866483   36385 main.go:141] libmachine: (functional-967202) Calling .DriverName
I1209 22:48:40.866678   36385 ssh_runner.go:195] Run: systemctl --version
I1209 22:48:40.866705   36385 main.go:141] libmachine: (functional-967202) Calling .GetSSHHostname
I1209 22:48:40.869815   36385 main.go:141] libmachine: (functional-967202) DBG | domain functional-967202 has defined MAC address 52:54:00:af:0a:5b in network mk-functional-967202
I1209 22:48:40.870271   36385 main.go:141] libmachine: (functional-967202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:0a:5b", ip: ""} in network mk-functional-967202: {Iface:virbr1 ExpiryTime:2024-12-09 23:45:32 +0000 UTC Type:0 Mac:52:54:00:af:0a:5b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:functional-967202 Clientid:01:52:54:00:af:0a:5b}
I1209 22:48:40.870310   36385 main.go:141] libmachine: (functional-967202) DBG | domain functional-967202 has defined IP address 192.168.50.72 and MAC address 52:54:00:af:0a:5b in network mk-functional-967202
I1209 22:48:40.870476   36385 main.go:141] libmachine: (functional-967202) Calling .GetSSHPort
I1209 22:48:40.870655   36385 main.go:141] libmachine: (functional-967202) Calling .GetSSHKeyPath
I1209 22:48:40.870792   36385 main.go:141] libmachine: (functional-967202) Calling .GetSSHUsername
I1209 22:48:40.870928   36385 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/functional-967202/id_rsa Username:docker}
I1209 22:48:40.969182   36385 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 22:48:43.032455   36385 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.06323839s)
W1209 22:48:43.032527   36385 cache_images.go:734] Failed to list images for profile functional-967202 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E1209 22:48:43.002415    8764 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2024-12-09T22:48:43Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I1209 22:48:43.032571   36385 main.go:141] libmachine: Making call to close driver server
I1209 22:48:43.032586   36385 main.go:141] libmachine: (functional-967202) Calling .Close
I1209 22:48:43.032876   36385 main.go:141] libmachine: Successfully made call to close driver server
I1209 22:48:43.032896   36385 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 22:48:43.032911   36385 main.go:141] libmachine: Making call to close driver server
I1209 22:48:43.032902   36385 main.go:141] libmachine: (functional-967202) DBG | Closing plugin on server side
I1209 22:48:43.032919   36385 main.go:141] libmachine: (functional-967202) Calling .Close
I1209 22:48:43.033143   36385 main.go:141] libmachine: Successfully made call to close driver server
I1209 22:48:43.033174   36385 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)
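The empty image list above traces back to the crictl call timing out on the node; a minimal manual check, using only the commands already shown in this log (profile functional-967202) and assuming minikube ssh is used to reach the node:

	# Wrapper command the test drives (returned an empty list here)
	out/minikube-linux-amd64 -p functional-967202 image ls --format short --alsologtostderr
	# Underlying query that hit DeadlineExceeded, run directly on the node
	out/minikube-linux-amd64 -p functional-967202 ssh -- sudo crictl images --output json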

x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.46s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 node stop m02 -v=7 --alsologtostderr
E1209 22:53:53.500552   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:54:34.461982   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:55:26.333316   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-920193 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.467493438s)

-- stdout --
	* Stopping node "ha-920193-m02"  ...

-- /stdout --
** stderr ** 
	I1209 22:53:36.714639   40830 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:53:36.714761   40830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:53:36.714769   40830 out.go:358] Setting ErrFile to fd 2...
	I1209 22:53:36.714774   40830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:53:36.714962   40830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 22:53:36.715189   40830 mustload.go:65] Loading cluster: ha-920193
	I1209 22:53:36.715624   40830 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:53:36.715646   40830 stop.go:39] StopHost: ha-920193-m02
	I1209 22:53:36.715994   40830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:53:36.716029   40830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:53:36.731595   40830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I1209 22:53:36.732106   40830 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:53:36.732765   40830 main.go:141] libmachine: Using API Version  1
	I1209 22:53:36.732789   40830 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:53:36.733155   40830 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:53:36.735638   40830 out.go:177] * Stopping node "ha-920193-m02"  ...
	I1209 22:53:36.736854   40830 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 22:53:36.736889   40830 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:53:36.737113   40830 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 22:53:36.737136   40830 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:53:36.739930   40830 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:53:36.740342   40830 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:53:36.740367   40830 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:53:36.740516   40830 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:53:36.740688   40830 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:53:36.740835   40830 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:53:36.740954   40830 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:53:36.827120   40830 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 22:53:36.880973   40830 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1209 22:53:36.936316   40830 main.go:141] libmachine: Stopping "ha-920193-m02"...
	I1209 22:53:36.936340   40830 main.go:141] libmachine: (ha-920193-m02) Calling .GetState
	I1209 22:53:36.937997   40830 main.go:141] libmachine: (ha-920193-m02) Calling .Stop
	I1209 22:53:36.941559   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 0/120
	I1209 22:53:37.943216   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 1/120
	I1209 22:53:38.944427   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 2/120
	I1209 22:53:39.946480   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 3/120
	I1209 22:53:40.947969   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 4/120
	I1209 22:53:41.950031   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 5/120
	I1209 22:53:42.951377   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 6/120
	I1209 22:53:43.952703   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 7/120
	I1209 22:53:44.954197   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 8/120
	I1209 22:53:45.955845   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 9/120
	I1209 22:53:46.958311   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 10/120
	I1209 22:53:47.959707   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 11/120
	I1209 22:53:48.962037   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 12/120
	I1209 22:53:49.963402   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 13/120
	I1209 22:53:50.964648   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 14/120
	I1209 22:53:51.966586   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 15/120
	I1209 22:53:52.967905   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 16/120
	I1209 22:53:53.969307   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 17/120
	I1209 22:53:54.970711   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 18/120
	I1209 22:53:55.971957   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 19/120
	I1209 22:53:56.974062   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 20/120
	I1209 22:53:57.975387   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 21/120
	I1209 22:53:58.976650   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 22/120
	I1209 22:53:59.977974   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 23/120
	I1209 22:54:00.979312   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 24/120
	I1209 22:54:01.981361   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 25/120
	I1209 22:54:02.982678   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 26/120
	I1209 22:54:03.984120   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 27/120
	I1209 22:54:04.985479   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 28/120
	I1209 22:54:05.986863   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 29/120
	I1209 22:54:06.988963   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 30/120
	I1209 22:54:07.990167   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 31/120
	I1209 22:54:08.991647   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 32/120
	I1209 22:54:09.992984   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 33/120
	I1209 22:54:10.994425   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 34/120
	I1209 22:54:11.996268   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 35/120
	I1209 22:54:12.997558   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 36/120
	I1209 22:54:13.998814   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 37/120
	I1209 22:54:15.000449   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 38/120
	I1209 22:54:16.001891   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 39/120
	I1209 22:54:17.003898   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 40/120
	I1209 22:54:18.006322   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 41/120
	I1209 22:54:19.007847   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 42/120
	I1209 22:54:20.010078   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 43/120
	I1209 22:54:21.011465   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 44/120
	I1209 22:54:22.013193   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 45/120
	I1209 22:54:23.014807   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 46/120
	I1209 22:54:24.016360   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 47/120
	I1209 22:54:25.018043   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 48/120
	I1209 22:54:26.019147   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 49/120
	I1209 22:54:27.021653   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 50/120
	I1209 22:54:28.023019   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 51/120
	I1209 22:54:29.024385   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 52/120
	I1209 22:54:30.026252   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 53/120
	I1209 22:54:31.027467   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 54/120
	I1209 22:54:32.029321   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 55/120
	I1209 22:54:33.030755   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 56/120
	I1209 22:54:34.032047   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 57/120
	I1209 22:54:35.033461   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 58/120
	I1209 22:54:36.034857   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 59/120
	I1209 22:54:37.037190   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 60/120
	I1209 22:54:38.038671   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 61/120
	I1209 22:54:39.040399   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 62/120
	I1209 22:54:40.042099   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 63/120
	I1209 22:54:41.043692   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 64/120
	I1209 22:54:42.045579   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 65/120
	I1209 22:54:43.047171   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 66/120
	I1209 22:54:44.048479   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 67/120
	I1209 22:54:45.050150   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 68/120
	I1209 22:54:46.051608   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 69/120
	I1209 22:54:47.053751   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 70/120
	I1209 22:54:48.055999   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 71/120
	I1209 22:54:49.058252   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 72/120
	I1209 22:54:50.059498   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 73/120
	I1209 22:54:51.061827   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 74/120
	I1209 22:54:52.063838   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 75/120
	I1209 22:54:53.066005   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 76/120
	I1209 22:54:54.067217   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 77/120
	I1209 22:54:55.068429   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 78/120
	I1209 22:54:56.069772   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 79/120
	I1209 22:54:57.071847   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 80/120
	I1209 22:54:58.073221   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 81/120
	I1209 22:54:59.074542   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 82/120
	I1209 22:55:00.076006   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 83/120
	I1209 22:55:01.078132   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 84/120
	I1209 22:55:02.079947   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 85/120
	I1209 22:55:03.081971   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 86/120
	I1209 22:55:04.083420   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 87/120
	I1209 22:55:05.084816   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 88/120
	I1209 22:55:06.086247   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 89/120
	I1209 22:55:07.088185   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 90/120
	I1209 22:55:08.089833   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 91/120
	I1209 22:55:09.091118   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 92/120
	I1209 22:55:10.092538   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 93/120
	I1209 22:55:11.094000   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 94/120
	I1209 22:55:12.095924   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 95/120
	I1209 22:55:13.097501   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 96/120
	I1209 22:55:14.099593   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 97/120
	I1209 22:55:15.100945   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 98/120
	I1209 22:55:16.102211   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 99/120
	I1209 22:55:17.104485   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 100/120
	I1209 22:55:18.106043   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 101/120
	I1209 22:55:19.107419   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 102/120
	I1209 22:55:20.108702   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 103/120
	I1209 22:55:21.110525   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 104/120
	I1209 22:55:22.112399   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 105/120
	I1209 22:55:23.113684   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 106/120
	I1209 22:55:24.115112   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 107/120
	I1209 22:55:25.116380   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 108/120
	I1209 22:55:26.117815   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 109/120
	I1209 22:55:27.120094   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 110/120
	I1209 22:55:28.122137   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 111/120
	I1209 22:55:29.123333   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 112/120
	I1209 22:55:30.124818   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 113/120
	I1209 22:55:31.127030   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 114/120
	I1209 22:55:32.129039   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 115/120
	I1209 22:55:33.130343   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 116/120
	I1209 22:55:34.132232   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 117/120
	I1209 22:55:35.134199   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 118/120
	I1209 22:55:36.135725   40830 main.go:141] libmachine: (ha-920193-m02) Waiting for machine to stop 119/120
	I1209 22:55:37.137110   40830 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1209 22:55:37.137345   40830 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-920193 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr: (18.822954109s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-920193 -n ha-920193
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 logs -n 25
E1209 22:55:56.384126   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-920193 logs -n 25: (1.285434897s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3609340860/001/cp-test_ha-920193-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193:/home/docker/cp-test_ha-920193-m03_ha-920193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193 sudo cat                                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m02:/home/docker/cp-test_ha-920193-m03_ha-920193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m02 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04:/home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m04 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp testdata/cp-test.txt                                                | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3609340860/001/cp-test_ha-920193-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193:/home/docker/cp-test_ha-920193-m04_ha-920193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193 sudo cat                                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m02:/home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m02 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03:/home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m03 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-920193 node stop m02 -v=7                                                     | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 22:49:03
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 22:49:03.145250   36778 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:49:03.145390   36778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:49:03.145399   36778 out.go:358] Setting ErrFile to fd 2...
	I1209 22:49:03.145404   36778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:49:03.145610   36778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 22:49:03.146205   36778 out.go:352] Setting JSON to false
	I1209 22:49:03.147113   36778 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5494,"bootTime":1733779049,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 22:49:03.147209   36778 start.go:139] virtualization: kvm guest
	I1209 22:49:03.149227   36778 out.go:177] * [ha-920193] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 22:49:03.150446   36778 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 22:49:03.150468   36778 notify.go:220] Checking for updates...
	I1209 22:49:03.152730   36778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:49:03.153842   36778 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:49:03.154957   36778 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:03.156087   36778 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 22:49:03.157179   36778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 22:49:03.158417   36778 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:49:03.193867   36778 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 22:49:03.195030   36778 start.go:297] selected driver: kvm2
	I1209 22:49:03.195046   36778 start.go:901] validating driver "kvm2" against <nil>
	I1209 22:49:03.195060   36778 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 22:49:03.196334   36778 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:49:03.196484   36778 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 22:49:03.213595   36778 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 22:49:03.213648   36778 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 22:49:03.213994   36778 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:49:03.214030   36778 cni.go:84] Creating CNI manager for ""
	I1209 22:49:03.214072   36778 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1209 22:49:03.214085   36778 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 22:49:03.214141   36778 start.go:340] cluster config:
	{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
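	The dump above is Go's field:value rendering of the cluster config that minikube generated from the flags. As an editorial illustration only, the sketch below mirrors a handful of the fields visible in that dump in a trimmed-down struct; the type and field set are hypothetical and are not minikube's actual config types.

```go
// Illustrative sketch: a reduced mirror of a few fields seen in the cluster
// config dump above. Not minikube's real config.ClusterConfig definition.
package main

import "fmt"

type KubernetesConfig struct {
	KubernetesVersion string // "v1.31.2" in this run
	ClusterName       string // "ha-920193"
	ContainerRuntime  string // "crio"
	NetworkPlugin     string // "cni"
	ServiceCIDR       string // "10.96.0.0/12"
}

type Node struct {
	Name         string
	IP           string
	Port         int
	ControlPlane bool
	Worker       bool
}

type ClusterConfig struct {
	Name             string
	Driver           string // "kvm2"
	Memory           int    // MiB
	CPUs             int
	DiskSize         int // MB
	KubernetesConfig KubernetesConfig
	Nodes            []Node
}

func main() {
	cfg := ClusterConfig{
		Name:     "ha-920193",
		Driver:   "kvm2",
		Memory:   2200,
		CPUs:     2,
		DiskSize: 20000,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.31.2",
			ClusterName:       "ha-920193",
			ContainerRuntime:  "crio",
			NetworkPlugin:     "cni",
			ServiceCIDR:       "10.96.0.0/12",
		},
		Nodes: []Node{{Port: 8443, ControlPlane: true, Worker: true}},
	}
	// %+v produces the same field:value style that appears in the log line above.
	fmt.Printf("%+v\n", cfg)
}
```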
	I1209 22:49:03.214261   36778 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:49:03.215829   36778 out.go:177] * Starting "ha-920193" primary control-plane node in "ha-920193" cluster
	I1209 22:49:03.216947   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:49:03.216988   36778 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 22:49:03.217002   36778 cache.go:56] Caching tarball of preloaded images
	I1209 22:49:03.217077   36778 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:49:03.217091   36778 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:49:03.217507   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:03.217534   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json: {Name:mk69f8481a2f9361b3b46196caa6653a8d77a9fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:03.217729   36778 start.go:360] acquireMachinesLock for ha-920193: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:49:03.217779   36778 start.go:364] duration metric: took 30.111µs to acquireMachinesLock for "ha-920193"
	I1209 22:49:03.217805   36778 start.go:93] Provisioning new machine with config: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:49:03.217887   36778 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 22:49:03.219504   36778 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 22:49:03.219675   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:03.219709   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:03.234776   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
	I1209 22:49:03.235235   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:03.235843   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:03.235867   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:03.236261   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:03.236466   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:03.236632   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:03.236794   36778 start.go:159] libmachine.API.Create for "ha-920193" (driver="kvm2")
	I1209 22:49:03.236821   36778 client.go:168] LocalClient.Create starting
	I1209 22:49:03.236862   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:49:03.236900   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:03.236922   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:03.237001   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:49:03.237033   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:03.237054   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:03.237078   36778 main.go:141] libmachine: Running pre-create checks...
	I1209 22:49:03.237090   36778 main.go:141] libmachine: (ha-920193) Calling .PreCreateCheck
	I1209 22:49:03.237426   36778 main.go:141] libmachine: (ha-920193) Calling .GetConfigRaw
	I1209 22:49:03.237793   36778 main.go:141] libmachine: Creating machine...
	I1209 22:49:03.237806   36778 main.go:141] libmachine: (ha-920193) Calling .Create
	I1209 22:49:03.237934   36778 main.go:141] libmachine: (ha-920193) Creating KVM machine...
	I1209 22:49:03.239483   36778 main.go:141] libmachine: (ha-920193) DBG | found existing default KVM network
	I1209 22:49:03.240340   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.240142   36801 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1209 22:49:03.240365   36778 main.go:141] libmachine: (ha-920193) DBG | created network xml: 
	I1209 22:49:03.240393   36778 main.go:141] libmachine: (ha-920193) DBG | <network>
	I1209 22:49:03.240407   36778 main.go:141] libmachine: (ha-920193) DBG |   <name>mk-ha-920193</name>
	I1209 22:49:03.240417   36778 main.go:141] libmachine: (ha-920193) DBG |   <dns enable='no'/>
	I1209 22:49:03.240427   36778 main.go:141] libmachine: (ha-920193) DBG |   
	I1209 22:49:03.240438   36778 main.go:141] libmachine: (ha-920193) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 22:49:03.240454   36778 main.go:141] libmachine: (ha-920193) DBG |     <dhcp>
	I1209 22:49:03.240491   36778 main.go:141] libmachine: (ha-920193) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 22:49:03.240508   36778 main.go:141] libmachine: (ha-920193) DBG |     </dhcp>
	I1209 22:49:03.240522   36778 main.go:141] libmachine: (ha-920193) DBG |   </ip>
	I1209 22:49:03.240532   36778 main.go:141] libmachine: (ha-920193) DBG |   
	I1209 22:49:03.240542   36778 main.go:141] libmachine: (ha-920193) DBG | </network>
	I1209 22:49:03.240557   36778 main.go:141] libmachine: (ha-920193) DBG | 
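	The XML echoed above is the libvirt <network> definition the kvm2 driver creates for the private network mk-ha-920193 (gateway 192.168.39.1, DHCP range .2 to .253). As a minimal sketch of how such a document can be rendered, the Go program below fills the same template with the values from the log; the template and type names are illustrative and are not taken from the driver's source.

```go
// Sketch: render a libvirt <network> definition like the one logged above.
// networkTmpl and NetParams are illustrative names, not kvm2 driver code.
package main

import (
	"os"
	"text/template"
)

const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

type NetParams struct {
	Name      string
	Gateway   string
	Netmask   string
	ClientMin string
	ClientMax string
}

func main() {
	p := NetParams{
		Name:      "mk-ha-920193",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		ClientMin: "192.168.39.2",
		ClientMax: "192.168.39.253",
	}
	// The rendered XML is what gets handed to libvirt to define and start
	// the private network before the domain itself is created.
	tmpl := template.Must(template.New("net").Parse(networkTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```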
	I1209 22:49:03.245903   36778 main.go:141] libmachine: (ha-920193) DBG | trying to create private KVM network mk-ha-920193 192.168.39.0/24...
	I1209 22:49:03.312870   36778 main.go:141] libmachine: (ha-920193) DBG | private KVM network mk-ha-920193 192.168.39.0/24 created
	I1209 22:49:03.312901   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.312803   36801 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:03.312925   36778 main.go:141] libmachine: (ha-920193) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193 ...
	I1209 22:49:03.312938   36778 main.go:141] libmachine: (ha-920193) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:49:03.312960   36778 main.go:141] libmachine: (ha-920193) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:49:03.559720   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.559511   36801 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa...
	I1209 22:49:03.632777   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.632628   36801 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/ha-920193.rawdisk...
	I1209 22:49:03.632808   36778 main.go:141] libmachine: (ha-920193) DBG | Writing magic tar header
	I1209 22:49:03.632868   36778 main.go:141] libmachine: (ha-920193) DBG | Writing SSH key tar header
	I1209 22:49:03.632897   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.632735   36801 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193 ...
	I1209 22:49:03.632914   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193 (perms=drwx------)
	I1209 22:49:03.632931   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:49:03.632938   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:49:03.632951   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:49:03.632959   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:49:03.632968   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:49:03.632988   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193
	I1209 22:49:03.632996   36778 main.go:141] libmachine: (ha-920193) Creating domain...
	I1209 22:49:03.633013   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:49:03.633026   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:03.633034   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:49:03.633039   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:49:03.633046   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:49:03.633051   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home
	I1209 22:49:03.633058   36778 main.go:141] libmachine: (ha-920193) DBG | Skipping /home - not owner
	I1209 22:49:03.634033   36778 main.go:141] libmachine: (ha-920193) define libvirt domain using xml: 
	I1209 22:49:03.634053   36778 main.go:141] libmachine: (ha-920193) <domain type='kvm'>
	I1209 22:49:03.634063   36778 main.go:141] libmachine: (ha-920193)   <name>ha-920193</name>
	I1209 22:49:03.634077   36778 main.go:141] libmachine: (ha-920193)   <memory unit='MiB'>2200</memory>
	I1209 22:49:03.634087   36778 main.go:141] libmachine: (ha-920193)   <vcpu>2</vcpu>
	I1209 22:49:03.634099   36778 main.go:141] libmachine: (ha-920193)   <features>
	I1209 22:49:03.634108   36778 main.go:141] libmachine: (ha-920193)     <acpi/>
	I1209 22:49:03.634117   36778 main.go:141] libmachine: (ha-920193)     <apic/>
	I1209 22:49:03.634126   36778 main.go:141] libmachine: (ha-920193)     <pae/>
	I1209 22:49:03.634143   36778 main.go:141] libmachine: (ha-920193)     
	I1209 22:49:03.634155   36778 main.go:141] libmachine: (ha-920193)   </features>
	I1209 22:49:03.634163   36778 main.go:141] libmachine: (ha-920193)   <cpu mode='host-passthrough'>
	I1209 22:49:03.634172   36778 main.go:141] libmachine: (ha-920193)   
	I1209 22:49:03.634184   36778 main.go:141] libmachine: (ha-920193)   </cpu>
	I1209 22:49:03.634192   36778 main.go:141] libmachine: (ha-920193)   <os>
	I1209 22:49:03.634200   36778 main.go:141] libmachine: (ha-920193)     <type>hvm</type>
	I1209 22:49:03.634209   36778 main.go:141] libmachine: (ha-920193)     <boot dev='cdrom'/>
	I1209 22:49:03.634217   36778 main.go:141] libmachine: (ha-920193)     <boot dev='hd'/>
	I1209 22:49:03.634226   36778 main.go:141] libmachine: (ha-920193)     <bootmenu enable='no'/>
	I1209 22:49:03.634233   36778 main.go:141] libmachine: (ha-920193)   </os>
	I1209 22:49:03.634241   36778 main.go:141] libmachine: (ha-920193)   <devices>
	I1209 22:49:03.634250   36778 main.go:141] libmachine: (ha-920193)     <disk type='file' device='cdrom'>
	I1209 22:49:03.634279   36778 main.go:141] libmachine: (ha-920193)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/boot2docker.iso'/>
	I1209 22:49:03.634301   36778 main.go:141] libmachine: (ha-920193)       <target dev='hdc' bus='scsi'/>
	I1209 22:49:03.634316   36778 main.go:141] libmachine: (ha-920193)       <readonly/>
	I1209 22:49:03.634323   36778 main.go:141] libmachine: (ha-920193)     </disk>
	I1209 22:49:03.634332   36778 main.go:141] libmachine: (ha-920193)     <disk type='file' device='disk'>
	I1209 22:49:03.634344   36778 main.go:141] libmachine: (ha-920193)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:49:03.634359   36778 main.go:141] libmachine: (ha-920193)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/ha-920193.rawdisk'/>
	I1209 22:49:03.634367   36778 main.go:141] libmachine: (ha-920193)       <target dev='hda' bus='virtio'/>
	I1209 22:49:03.634375   36778 main.go:141] libmachine: (ha-920193)     </disk>
	I1209 22:49:03.634383   36778 main.go:141] libmachine: (ha-920193)     <interface type='network'>
	I1209 22:49:03.634391   36778 main.go:141] libmachine: (ha-920193)       <source network='mk-ha-920193'/>
	I1209 22:49:03.634409   36778 main.go:141] libmachine: (ha-920193)       <model type='virtio'/>
	I1209 22:49:03.634421   36778 main.go:141] libmachine: (ha-920193)     </interface>
	I1209 22:49:03.634431   36778 main.go:141] libmachine: (ha-920193)     <interface type='network'>
	I1209 22:49:03.634442   36778 main.go:141] libmachine: (ha-920193)       <source network='default'/>
	I1209 22:49:03.634452   36778 main.go:141] libmachine: (ha-920193)       <model type='virtio'/>
	I1209 22:49:03.634463   36778 main.go:141] libmachine: (ha-920193)     </interface>
	I1209 22:49:03.634473   36778 main.go:141] libmachine: (ha-920193)     <serial type='pty'>
	I1209 22:49:03.634484   36778 main.go:141] libmachine: (ha-920193)       <target port='0'/>
	I1209 22:49:03.634498   36778 main.go:141] libmachine: (ha-920193)     </serial>
	I1209 22:49:03.634535   36778 main.go:141] libmachine: (ha-920193)     <console type='pty'>
	I1209 22:49:03.634561   36778 main.go:141] libmachine: (ha-920193)       <target type='serial' port='0'/>
	I1209 22:49:03.634581   36778 main.go:141] libmachine: (ha-920193)     </console>
	I1209 22:49:03.634592   36778 main.go:141] libmachine: (ha-920193)     <rng model='virtio'>
	I1209 22:49:03.634601   36778 main.go:141] libmachine: (ha-920193)       <backend model='random'>/dev/random</backend>
	I1209 22:49:03.634611   36778 main.go:141] libmachine: (ha-920193)     </rng>
	I1209 22:49:03.634621   36778 main.go:141] libmachine: (ha-920193)     
	I1209 22:49:03.634629   36778 main.go:141] libmachine: (ha-920193)     
	I1209 22:49:03.634634   36778 main.go:141] libmachine: (ha-920193)   </devices>
	I1209 22:49:03.634641   36778 main.go:141] libmachine: (ha-920193) </domain>
	I1209 22:49:03.634660   36778 main.go:141] libmachine: (ha-920193) 
	I1209 22:49:03.638977   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:88:5b:26 in network default
	I1209 22:49:03.639478   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:03.639517   36778 main.go:141] libmachine: (ha-920193) Ensuring networks are active...
	I1209 22:49:03.640151   36778 main.go:141] libmachine: (ha-920193) Ensuring network default is active
	I1209 22:49:03.640468   36778 main.go:141] libmachine: (ha-920193) Ensuring network mk-ha-920193 is active
	I1209 22:49:03.640970   36778 main.go:141] libmachine: (ha-920193) Getting domain xml...
	I1209 22:49:03.641682   36778 main.go:141] libmachine: (ha-920193) Creating domain...
	I1209 22:49:04.829698   36778 main.go:141] libmachine: (ha-920193) Waiting to get IP...
	I1209 22:49:04.830434   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:04.830835   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:04.830867   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:04.830824   36801 retry.go:31] will retry after 207.081791ms: waiting for machine to come up
	I1209 22:49:05.039144   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:05.039519   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:05.039585   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:05.039471   36801 retry.go:31] will retry after 281.967291ms: waiting for machine to come up
	I1209 22:49:05.322964   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:05.323366   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:05.323382   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:05.323322   36801 retry.go:31] will retry after 481.505756ms: waiting for machine to come up
	I1209 22:49:05.805961   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:05.806356   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:05.806376   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:05.806314   36801 retry.go:31] will retry after 549.592497ms: waiting for machine to come up
	I1209 22:49:06.357773   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:06.358284   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:06.358319   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:06.358243   36801 retry.go:31] will retry after 535.906392ms: waiting for machine to come up
	I1209 22:49:06.896232   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:06.896608   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:06.896631   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:06.896560   36801 retry.go:31] will retry after 874.489459ms: waiting for machine to come up
	I1209 22:49:07.772350   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:07.772754   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:07.772787   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:07.772706   36801 retry.go:31] will retry after 1.162571844s: waiting for machine to come up
	I1209 22:49:08.936520   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:08.936889   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:08.936917   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:08.936873   36801 retry.go:31] will retry after 1.45755084s: waiting for machine to come up
	I1209 22:49:10.396453   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:10.396871   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:10.396892   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:10.396843   36801 retry.go:31] will retry after 1.609479332s: waiting for machine to come up
	I1209 22:49:12.008693   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:12.009140   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:12.009166   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:12.009087   36801 retry.go:31] will retry after 2.268363531s: waiting for machine to come up
	I1209 22:49:14.279389   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:14.279856   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:14.279912   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:14.279851   36801 retry.go:31] will retry after 2.675009942s: waiting for machine to come up
	I1209 22:49:16.957696   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:16.958066   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:16.958096   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:16.958013   36801 retry.go:31] will retry after 2.665510056s: waiting for machine to come up
	I1209 22:49:19.624784   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:19.625187   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:19.625202   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:19.625166   36801 retry.go:31] will retry after 2.857667417s: waiting for machine to come up
	I1209 22:49:22.486137   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:22.486540   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:22.486563   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:22.486493   36801 retry.go:31] will retry after 4.026256687s: waiting for machine to come up
	I1209 22:49:26.516409   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.516832   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has current primary IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.516858   36778 main.go:141] libmachine: (ha-920193) Found IP for machine: 192.168.39.102
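	The run of "will retry after ...ms: waiting for machine to come up" lines above is a bounded poll with growing, jittered delays: the driver keeps checking the network's DHCP leases for the new domain's MAC until an IP appears. The sketch below shows that pattern in self-contained form, assuming placeholder names and constants; it is not minikube's actual retry implementation.

```go
// Sketch of a retry loop with growing, jittered delays, matching the
// "will retry after ...ms" spacing in the log above. lookupIP and the
// constants are placeholders, not the kvm2 driver's code.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP stands in for querying libvirt's DHCP leases for the domain's MAC.
func lookupIP(attempt int) (string, error) {
	if attempt < 15 { // pretend the lease shows up on a later attempt
		return "", errNoIP
	}
	return "192.168.39.102", nil
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter so repeated polls spread out,
		// much like the 207ms, 281ms, 481ms, ... intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for IP: %w", errNoIP)
}

func main() {
	ip, err := waitForIP(4 * time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("Found IP for machine:", ip)
}
```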
	I1209 22:49:26.516892   36778 main.go:141] libmachine: (ha-920193) Reserving static IP address...
	I1209 22:49:26.517220   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find host DHCP lease matching {name: "ha-920193", mac: "52:54:00:eb:3c:cb", ip: "192.168.39.102"} in network mk-ha-920193
	I1209 22:49:26.587512   36778 main.go:141] libmachine: (ha-920193) DBG | Getting to WaitForSSH function...
	I1209 22:49:26.587538   36778 main.go:141] libmachine: (ha-920193) Reserved static IP address: 192.168.39.102
	I1209 22:49:26.587551   36778 main.go:141] libmachine: (ha-920193) Waiting for SSH to be available...
	I1209 22:49:26.589724   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.590056   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.590080   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.590252   36778 main.go:141] libmachine: (ha-920193) DBG | Using SSH client type: external
	I1209 22:49:26.590281   36778 main.go:141] libmachine: (ha-920193) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa (-rw-------)
	I1209 22:49:26.590312   36778 main.go:141] libmachine: (ha-920193) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:49:26.590335   36778 main.go:141] libmachine: (ha-920193) DBG | About to run SSH command:
	I1209 22:49:26.590368   36778 main.go:141] libmachine: (ha-920193) DBG | exit 0
	I1209 22:49:26.707404   36778 main.go:141] libmachine: (ha-920193) DBG | SSH cmd err, output: <nil>: 
	I1209 22:49:26.707687   36778 main.go:141] libmachine: (ha-920193) KVM machine creation complete!
	I1209 22:49:26.708024   36778 main.go:141] libmachine: (ha-920193) Calling .GetConfigRaw
	I1209 22:49:26.708523   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:26.708739   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:26.708918   36778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:49:26.708931   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:26.710113   36778 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:49:26.710125   36778 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:49:26.710130   36778 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:49:26.710135   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:26.712426   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.712765   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.712791   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.712925   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:26.713081   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.713185   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.713306   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:26.713452   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:26.713680   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:26.713692   36778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:49:26.806695   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
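	The `exit 0` command above is purely a reachability probe: if an SSH session can be opened and the shell returns status 0, the guest is considered up. A hedged sketch of the same probe using golang.org/x/crypto/ssh follows; the helper name, address, and key path are illustrative values taken from this run, not the libmachine implementation.

```go
// Sketch: probe SSH reachability by running "exit 0", as in the log above.
// sshReady and the hard-coded address/key path are illustrative only.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	// A zero exit status from "exit 0" proves the shell is reachable.
	return sess.Run("exit 0")
}

func main() {
	err := sshReady("192.168.39.102:22", "docker",
		"/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa")
	fmt.Println("SSH cmd err, output:", err)
}
```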
	I1209 22:49:26.806717   36778 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:49:26.806725   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:26.809366   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.809767   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.809800   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.809958   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:26.810141   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.810311   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.810444   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:26.810627   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:26.810776   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:26.810787   36778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:49:26.908040   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:49:26.908090   36778 main.go:141] libmachine: found compatible host: buildroot
	I1209 22:49:26.908097   36778 main.go:141] libmachine: Provisioning with buildroot...
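	Provisioner detection above amounts to reading /etc/os-release over SSH and matching its key=value fields; the "buildroot" ID selects the Buildroot provisioner. A small stdlib-only sketch of that parsing is shown below, using the exact output captured in the log; the function names are illustrative and not libmachine's detector.

```go
// Sketch: parse /etc/os-release output (as printed above) into key/value
// pairs and pick a provisioner from the ID field. Illustrative names only.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const osRelease = `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
`

func parseOSRelease(s string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		out[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return out
}

func detectProvisioner(fields map[string]string) string {
	if fields["ID"] == "buildroot" {
		return "buildroot"
	}
	return "generic"
}

func main() {
	fields := parseOSRelease(osRelease)
	fmt.Println("found compatible host:", detectProvisioner(fields))
}
```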
	I1209 22:49:26.908104   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:26.908364   36778 buildroot.go:166] provisioning hostname "ha-920193"
	I1209 22:49:26.908392   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:26.908590   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:26.911118   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.911513   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.911538   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.911715   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:26.911868   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.911989   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.912100   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:26.912224   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:26.912420   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:26.912438   36778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193 && echo "ha-920193" | sudo tee /etc/hostname
	I1209 22:49:27.020773   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193
	
	I1209 22:49:27.020799   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.023575   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.023846   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.023871   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.024029   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.024220   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.024374   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.024530   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.024691   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:27.024872   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:27.024888   36778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:49:27.127613   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:49:27.127642   36778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:49:27.127660   36778 buildroot.go:174] setting up certificates
	I1209 22:49:27.127691   36778 provision.go:84] configureAuth start
	I1209 22:49:27.127710   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:27.127961   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:27.130248   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.130591   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.130619   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.130738   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.132923   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.133247   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.133271   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.133422   36778 provision.go:143] copyHostCerts
	I1209 22:49:27.133461   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:49:27.133491   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:49:27.133506   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:49:27.133573   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:49:27.133653   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:49:27.133670   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:49:27.133677   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:49:27.133702   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:49:27.133745   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:49:27.133761   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:49:27.133767   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:49:27.133788   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:49:27.133835   36778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193 san=[127.0.0.1 192.168.39.102 ha-920193 localhost minikube]
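	The line above records issuing a server certificate signed by the local CA with the SANs 127.0.0.1, 192.168.39.102, ha-920193, localhost, and minikube. The sketch below shows one way to produce a certificate with that SAN set using Go's crypto/x509, generating a throwaway CA inline; it is only an illustration and not minikube's provisioning code.

```go
// Sketch: issue a server certificate whose SANs match the list logged above,
// signed by a locally generated CA. Illustrative only; error handling is
// elided for brevity and this is not minikube's implementation.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for the sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-920193"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-920193", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.102")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	fmt.Printf("issued %d-byte DER server cert with SANs %v %v\n",
		len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
}
```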
	I1209 22:49:27.297434   36778 provision.go:177] copyRemoteCerts
	I1209 22:49:27.297494   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:49:27.297515   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.300069   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.300424   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.300443   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.300615   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.300792   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.300928   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.301029   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.378773   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:49:27.378830   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1209 22:49:27.403553   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:49:27.403627   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 22:49:27.425459   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:49:27.425526   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:49:27.449197   36778 provision.go:87] duration metric: took 321.487984ms to configureAuth
	I1209 22:49:27.449229   36778 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:49:27.449449   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:49:27.449534   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.453191   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.453559   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.453595   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.453759   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.453939   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.454070   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.454184   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.454317   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:27.454513   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:27.454534   36778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:49:27.653703   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:49:27.653733   36778 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:49:27.653756   36778 main.go:141] libmachine: (ha-920193) Calling .GetURL
	I1209 22:49:27.655032   36778 main.go:141] libmachine: (ha-920193) DBG | Using libvirt version 6000000
	I1209 22:49:27.657160   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.657463   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.657491   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.657682   36778 main.go:141] libmachine: Docker is up and running!
	I1209 22:49:27.657699   36778 main.go:141] libmachine: Reticulating splines...
	I1209 22:49:27.657708   36778 client.go:171] duration metric: took 24.420875377s to LocalClient.Create
	I1209 22:49:27.657735   36778 start.go:167] duration metric: took 24.420942176s to libmachine.API.Create "ha-920193"
	I1209 22:49:27.657747   36778 start.go:293] postStartSetup for "ha-920193" (driver="kvm2")
	I1209 22:49:27.657761   36778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:49:27.657785   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.657983   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:49:27.658006   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.659917   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.660172   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.660200   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.660370   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.660519   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.660646   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.660782   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.737935   36778 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:49:27.741969   36778 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:49:27.741998   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:49:27.742081   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:49:27.742178   36778 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:49:27.742190   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:49:27.742316   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:49:27.752769   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:49:27.776187   36778 start.go:296] duration metric: took 118.424893ms for postStartSetup
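The filesync.go lines above scan `.minikube/addons` and `.minikube/files` and mirror anything found under `files/` to the same path inside the guest (here `files/etc/ssl/certs/262532.pem` becomes `/etc/ssl/certs/262532.pem`). A minimal sketch of that path mapping, assuming a local `files` root of your own; the scp plumbing that the real ssh_runner provides is omitted:

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// listFileAssets walks a local "files" tree and returns the guest-side
// destination for each regular file, mirroring the layout under filesRoot.
func listFileAssets(filesRoot string) (map[string]string, error) {
	assets := map[string]string{} // local path -> remote path
	err := filepath.WalkDir(filesRoot, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(filesRoot, p)
		if err != nil {
			return err
		}
		// files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
		assets[p] = "/" + strings.ReplaceAll(rel, string(filepath.Separator), "/")
		return nil
	})
	return assets, err
}

func main() {
	// hypothetical root; the log uses .../.minikube/files under the Jenkins workspace
	assets, err := listFileAssets("/home/jenkins/.minikube/files")
	if err != nil {
		fmt.Println("walk error:", err)
		return
	}
	for local, remote := range assets {
		fmt.Printf("%s -> %s\n", local, remote)
	}
}
```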
	I1209 22:49:27.776233   36778 main.go:141] libmachine: (ha-920193) Calling .GetConfigRaw
	I1209 22:49:27.776813   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:27.779433   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.779777   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.779809   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.780018   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:27.780196   36778 start.go:128] duration metric: took 24.562298059s to createHost
	I1209 22:49:27.780219   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.782389   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.782713   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.782737   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.782928   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.783093   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.783255   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.783378   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.783531   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:27.783762   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:27.783780   36778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:49:27.880035   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733784567.857266275
	
	I1209 22:49:27.880058   36778 fix.go:216] guest clock: 1733784567.857266275
	I1209 22:49:27.880065   36778 fix.go:229] Guest: 2024-12-09 22:49:27.857266275 +0000 UTC Remote: 2024-12-09 22:49:27.780207864 +0000 UTC m=+24.672894470 (delta=77.058411ms)
	I1209 22:49:27.880084   36778 fix.go:200] guest clock delta is within tolerance: 77.058411ms
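The fix.go lines above run `date +%s.%N` on the guest, parse the result, and only act when the guest/host delta exceeds a tolerance (the observed delta in this run is ~77ms, which passes). A rough Go sketch of that comparison, using the timestamp from the log; the 2-second tolerance is an assumed value, not minikube's actual constant:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (e.g. "1733784567.857266275")
// into a time.Time. %N always prints nine digits, so the fraction maps directly
// to nanoseconds.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733784567.857266275") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	tolerance := 2 * time.Second // assumed threshold
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```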
	I1209 22:49:27.880088   36778 start.go:83] releasing machines lock for "ha-920193", held for 24.662297943s
	I1209 22:49:27.880110   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.880381   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:27.883090   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.883418   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.883452   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.883630   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.884081   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.884211   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.884272   36778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:49:27.884329   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.884381   36778 ssh_runner.go:195] Run: cat /version.json
	I1209 22:49:27.884403   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.886622   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.886872   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.886899   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.886994   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.887039   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.887207   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.887321   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.887333   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.887353   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.887479   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.887529   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.887692   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.887829   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.887976   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.963462   36778 ssh_runner.go:195] Run: systemctl --version
	I1209 22:49:27.986028   36778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:49:28.143161   36778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:49:28.149221   36778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:49:28.149289   36778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:49:28.165410   36778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
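The two commands above check for a loopback CNI config and then rename any bridge/podman configs in /etc/cni/net.d to `*.mk_disabled` so they cannot conflict with the CNI minikube installs later (kindnet in this run). A small sketch of the same rename done in-process instead of via `find ... -exec mv`; the directory and suffix come from the log:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNIConfigs renames bridge/podman CNI config files in dir
// (normally /etc/cni/net.d) to <name>.mk_disabled, skipping files that are
// already disabled.
func disableConflictingCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled bridge cni config(s):", disabled)
}
```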
	I1209 22:49:28.165442   36778 start.go:495] detecting cgroup driver to use...
	I1209 22:49:28.165509   36778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:49:28.181384   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:49:28.195011   36778 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:49:28.195063   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:49:28.208554   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:49:28.222230   36778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:49:28.338093   36778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:49:28.483809   36778 docker.go:233] disabling docker service ...
	I1209 22:49:28.483868   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:49:28.497723   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:49:28.510133   36778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:49:28.637703   36778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:49:28.768621   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:49:28.781961   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:49:28.799140   36778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:49:28.799205   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.808634   36778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:49:28.808697   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.818355   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.827780   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.837191   36778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:49:28.846758   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.856291   36778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.872403   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.881716   36778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:49:28.890298   36778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:49:28.890355   36778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:49:28.902738   36778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 22:49:28.911729   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:49:29.013922   36778 ssh_runner.go:195] Run: sudo systemctl restart crio
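The sequence above points crictl at the CRI-O socket, rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image registry.k8s.io/pause:3.10, cgroup_manager "cgroupfs", conmon_cgroup "pod", net.ipv4.ip_unprivileged_port_start=0 added to default_sysctls), loads br_netfilter, enables IPv4 forwarding, and restarts crio. A compressed sketch of that command sequence driven from Go; it runs the steps locally with bash, whereas the real code pushes each string through ssh_runner, and a few guard steps from the log (the grep for an existing default_sysctls block, the bridge-nf-call sysctl probe) are folded out for brevity:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell snippet; minikube sends the equivalent strings over SSH.
func run(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q: %v: %s", cmd, err, out)
	}
	return nil
}

func main() {
	steps := []string{
		// point crictl at the CRI-O socket
		`sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml`,
		// pause image and cgroup driver in the CRI-O drop-in
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		// allow unprivileged low ports inside pods
		`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf`,
		// kernel prerequisites, then restart the runtime
		`sudo modprobe br_netfilter`,
		`sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("CRI-O configured for cgroupfs with pause:3.10")
}
```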
	I1209 22:49:29.106638   36778 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:49:29.106719   36778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:49:29.111193   36778 start.go:563] Will wait 60s for crictl version
	I1209 22:49:29.111261   36778 ssh_runner.go:195] Run: which crictl
	I1209 22:49:29.115298   36778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:49:29.151109   36778 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:49:29.151178   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:49:29.178245   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:49:29.206246   36778 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:49:29.207478   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:29.209787   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:29.210134   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:29.210160   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:29.210332   36778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:49:29.214243   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
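The bash one-liner above makes the host.minikube.internal entry idempotent: it filters out any existing line ending in that name, appends a fresh "192.168.39.1<tab>host.minikube.internal" mapping, and copies the result back over /etc/hosts. A Go sketch of the same filter-and-append, pointed at a scratch file so it is safe to run as-is:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for name from the hosts file and
// appends "ip\tname", mirroring the { grep -v ...; echo ...; } > tmp; cp tmp hosts
// pattern from the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // skip blanks and stale entries, like grep -v does
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	path := "/tmp/hosts.example" // sketch target; minikube edits /etc/hosts on the guest
	_ = os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0o644)
	if err := ensureHostsEntry(path, "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
		return
	}
	out, _ := os.ReadFile(path)
	fmt.Print(string(out))
}
```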
	I1209 22:49:29.226620   36778 kubeadm.go:883] updating cluster {Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 22:49:29.226723   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:49:29.226766   36778 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:49:29.257928   36778 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 22:49:29.257999   36778 ssh_runner.go:195] Run: which lz4
	I1209 22:49:29.261848   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1209 22:49:29.261955   36778 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 22:49:29.265782   36778 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 22:49:29.265814   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 22:49:30.441006   36778 crio.go:462] duration metric: took 1.179084887s to copy over tarball
	I1209 22:49:30.441074   36778 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 22:49:32.468580   36778 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.027482243s)
	I1209 22:49:32.468624   36778 crio.go:469] duration metric: took 2.027585779s to extract the tarball
	I1209 22:49:32.468641   36778 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 22:49:32.505123   36778 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:49:32.547324   36778 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 22:49:32.547346   36778 cache_images.go:84] Images are preloaded, skipping loading
	I1209 22:49:32.547353   36778 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.2 crio true true} ...
	I1209 22:49:32.547438   36778 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 22:49:32.547498   36778 ssh_runner.go:195] Run: crio config
	I1209 22:49:32.589945   36778 cni.go:84] Creating CNI manager for ""
	I1209 22:49:32.589970   36778 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 22:49:32.589982   36778 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 22:49:32.590011   36778 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-920193 NodeName:ha-920193 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 22:49:32.590137   36778 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-920193"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.102"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 22:49:32.590159   36778 kube-vip.go:115] generating kube-vip config ...
	I1209 22:49:32.590202   36778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:49:32.605724   36778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:49:32.605829   36778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
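The kube-vip static-pod manifest above is generated from the cluster's HA virtual IP (192.168.39.254), the API port (8443), and the NIC to announce on, with control-plane load balancing (lb_enable/lb_port) auto-enabled. A minimal text/template sketch of how such a manifest fragment could be filled in; the template text and field names here are illustrative only, not minikube's actual kube-vip template:

```go
package main

import (
	"os"
	"text/template"
)

// vipParams are the values that vary per cluster in the generated manifest.
type vipParams struct {
	Address   string // HA virtual IP, 192.168.39.254 in this run
	Port      string // API server port
	Interface string // NIC the VIP is announced on
	Image     string // kube-vip image tag
	LBEnable  bool   // control-plane load balancing, auto-enabled in the log
}

// fragment of a kube-vip pod spec; only a few env entries are shown for brevity.
const vipEnvTemplate = `    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .Address }}
    - name: lb_enable
      value: "{{ .LBEnable }}"
    image: {{ .Image }}
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipEnvTemplate))
	_ = t.Execute(os.Stdout, vipParams{
		Address:   "192.168.39.254",
		Port:      "8443",
		Interface: "eth0",
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.7",
		LBEnable:  true,
	})
}
```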
	I1209 22:49:32.605883   36778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:49:32.615285   36778 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 22:49:32.615345   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1209 22:49:32.624299   36778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1209 22:49:32.639876   36778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:49:32.656137   36778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1209 22:49:32.672494   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1209 22:49:32.688039   36778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:49:32.691843   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:49:32.703440   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:49:32.825661   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:49:32.842362   36778 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.102
	I1209 22:49:32.842387   36778 certs.go:194] generating shared ca certs ...
	I1209 22:49:32.842404   36778 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:32.842561   36778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:49:32.842601   36778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:49:32.842611   36778 certs.go:256] generating profile certs ...
	I1209 22:49:32.842674   36778 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:49:32.842693   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt with IP's: []
	I1209 22:49:32.980963   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt ...
	I1209 22:49:32.980992   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt: {Name:mkd9ec798303363f6538acfc05f1a5f04066e731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:32.981176   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key ...
	I1209 22:49:32.981188   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key: {Name:mk056f923a34783de09213845e376bddce6f3df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:32.981268   36778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19
	I1209 22:49:32.981285   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I1209 22:49:33.242216   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19 ...
	I1209 22:49:33.242250   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19: {Name:mk7179026523f0b057d26b52e40a5885ad95d8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.242434   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19 ...
	I1209 22:49:33.242448   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19: {Name:mk65609d82220269362f492c0a2d0cc4da8d1447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.242525   36778 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19 -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:49:33.242596   36778 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19 -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
	I1209 22:49:33.242650   36778 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:49:33.242665   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt with IP's: []
	I1209 22:49:33.389277   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt ...
	I1209 22:49:33.389307   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt: {Name:mk8b70654b36de7093b054b1d0d39a95b39d45fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.389473   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key ...
	I1209 22:49:33.389485   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key: {Name:mk4ec3e3be54da03f1d1683c75f10f14c0904ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
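The certs.go lines above issue the profile certificates against the shared minikubeCA: a client cert, an apiserver serving cert whose IP SANs include the service IP 10.96.0.1, the node IP 192.168.39.102 and the HA VIP 192.168.39.254, and an aggregator (proxy-client) cert. A condensed crypto/x509 sketch of issuing one such serving cert from a CA key pair; the key size, validity period, and subject are assumptions, and the throwaway CA in main stands in for the on-disk .minikube/ca.{crt,key}:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServingCert creates a key pair and a certificate signed by the given CA,
// with the IP SANs listed in the log above.
func issueServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.102"),
			net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// throwaway CA so the sketch runs end to end; errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	certPEM, keyPEM, err := issueServingCert(caCert, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued cert (%d bytes PEM) and key (%d bytes PEM)\n", len(certPEM), len(keyPEM))
}
```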
	I1209 22:49:33.389559   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:49:33.389576   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:49:33.389587   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:49:33.389600   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:49:33.389610   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:49:33.389620   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:49:33.389632   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:49:33.389642   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:49:33.389693   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:49:33.389729   36778 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:49:33.389739   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:49:33.389758   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:49:33.389781   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:49:33.389801   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:49:33.389837   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:49:33.389863   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.389878   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.389890   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.390445   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:49:33.414470   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:49:33.436920   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:49:33.458977   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:49:33.481846   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 22:49:33.503907   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 22:49:33.525852   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:49:33.548215   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:49:33.569802   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:49:33.602465   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:49:33.628007   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:49:33.653061   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 22:49:33.668632   36778 ssh_runner.go:195] Run: openssl version
	I1209 22:49:33.674257   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:49:33.684380   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.688650   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.688714   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.694036   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:49:33.704144   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:49:33.714060   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.718184   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.718227   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.723730   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 22:49:33.734203   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:49:33.744729   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.749033   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.749080   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.754563   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
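The loop above installs each CA bundle under /usr/share/ca-certificates and symlinks it into /etc/ssl/certs under its OpenSSL subject hash (`openssl x509 -hash -noout`, e.g. b5213941.0 for minikubeCA.pem). A small sketch that shells out to openssl for the hash and creates the link, since re-implementing OpenSSL's subject-hash canonicalisation by hand is error-prone; certsDir is parameterised so the example can target a scratch directory instead of /etc/ssl/certs:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash computes the OpenSSL subject hash of pemPath and creates
// <certsDir>/<hash>.0 pointing at it, like the `openssl x509 -hash -noout` +
// `ln -fs` pair in the log.
func linkCAByHash(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir())
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("created", link)
}
```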
	I1209 22:49:33.764859   36778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:49:33.768876   36778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:49:33.768937   36778 kubeadm.go:392] StartCluster: {Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:49:33.769036   36778 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 22:49:33.769105   36778 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 22:49:33.804100   36778 cri.go:89] found id: ""
	I1209 22:49:33.804165   36778 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 22:49:33.814344   36778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 22:49:33.824218   36778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 22:49:33.834084   36778 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 22:49:33.834106   36778 kubeadm.go:157] found existing configuration files:
	
	I1209 22:49:33.834157   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 22:49:33.843339   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 22:49:33.843379   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 22:49:33.853049   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 22:49:33.862222   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 22:49:33.862280   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 22:49:33.872041   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 22:49:33.881416   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 22:49:33.881475   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 22:49:33.891237   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 22:49:33.900609   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 22:49:33.900659   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 22:49:33.910089   36778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 22:49:34.000063   36778 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 22:49:34.000183   36778 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 22:49:34.091544   36778 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 22:49:34.091739   36778 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 22:49:34.091892   36778 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 22:49:34.100090   36778 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 22:49:34.102871   36778 out.go:235]   - Generating certificates and keys ...
	I1209 22:49:34.103528   36778 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 22:49:34.103648   36778 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 22:49:34.284340   36778 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 22:49:34.462874   36778 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 22:49:34.647453   36778 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 22:49:34.787984   36778 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 22:49:35.020609   36778 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 22:49:35.020761   36778 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-920193 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1209 22:49:35.078800   36778 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 22:49:35.078977   36778 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-920193 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1209 22:49:35.150500   36778 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 22:49:35.230381   36778 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 22:49:35.499235   36778 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 22:49:35.499319   36778 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 22:49:35.912886   36778 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 22:49:36.241120   36778 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 22:49:36.405939   36778 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 22:49:36.604047   36778 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 22:49:36.814671   36778 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 22:49:36.815164   36778 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 22:49:36.818373   36778 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 22:49:36.820325   36778 out.go:235]   - Booting up control plane ...
	I1209 22:49:36.820430   36778 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 22:49:36.820522   36778 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 22:49:36.821468   36778 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 22:49:36.841330   36778 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 22:49:36.848308   36778 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 22:49:36.848421   36778 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 22:49:36.995410   36778 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 22:49:36.995535   36778 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 22:49:37.995683   36778 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001015441s
	I1209 22:49:37.995786   36778 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 22:49:43.754200   36778 kubeadm.go:310] [api-check] The API server is healthy after 5.761609039s
	I1209 22:49:43.767861   36778 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 22:49:43.785346   36778 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 22:49:43.810025   36778 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 22:49:43.810266   36778 kubeadm.go:310] [mark-control-plane] Marking the node ha-920193 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 22:49:43.821256   36778 kubeadm.go:310] [bootstrap-token] Using token: 72yxn0.qrsfcagkngfj4gxi
	I1209 22:49:43.822572   36778 out.go:235]   - Configuring RBAC rules ...
	I1209 22:49:43.822691   36778 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 22:49:43.832707   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 22:49:43.844059   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 22:49:43.846995   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 22:49:43.849841   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 22:49:43.856257   36778 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 22:49:44.160151   36778 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 22:49:44.591740   36778 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 22:49:45.161509   36778 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 22:49:45.162464   36778 kubeadm.go:310] 
	I1209 22:49:45.162543   36778 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 22:49:45.162552   36778 kubeadm.go:310] 
	I1209 22:49:45.162641   36778 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 22:49:45.162653   36778 kubeadm.go:310] 
	I1209 22:49:45.162689   36778 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 22:49:45.162763   36778 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 22:49:45.162845   36778 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 22:49:45.162856   36778 kubeadm.go:310] 
	I1209 22:49:45.162934   36778 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 22:49:45.162944   36778 kubeadm.go:310] 
	I1209 22:49:45.163005   36778 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 22:49:45.163016   36778 kubeadm.go:310] 
	I1209 22:49:45.163084   36778 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 22:49:45.163184   36778 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 22:49:45.163290   36778 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 22:49:45.163301   36778 kubeadm.go:310] 
	I1209 22:49:45.163412   36778 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 22:49:45.163482   36778 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 22:49:45.163488   36778 kubeadm.go:310] 
	I1209 22:49:45.163586   36778 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 72yxn0.qrsfcagkngfj4gxi \
	I1209 22:49:45.163727   36778 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1209 22:49:45.163762   36778 kubeadm.go:310] 	--control-plane 
	I1209 22:49:45.163771   36778 kubeadm.go:310] 
	I1209 22:49:45.163891   36778 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 22:49:45.163902   36778 kubeadm.go:310] 
	I1209 22:49:45.164042   36778 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 72yxn0.qrsfcagkngfj4gxi \
	I1209 22:49:45.164198   36778 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1209 22:49:45.164453   36778 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
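The preflight warning above notes that the kubelet systemd unit was not enabled at init time. The fix kubeadm suggests, on any systemd host, is simply (sketch):

    # Enable and start the kubelet unit so it survives reboots (assumes systemd).
    sudo systemctl enable --now kubelet.service
    systemctl is-enabled kubelet.service   # should print "enabled"
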
	I1209 22:49:45.164487   36778 cni.go:84] Creating CNI manager for ""
	I1209 22:49:45.164497   36778 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 22:49:45.166869   36778 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1209 22:49:45.168578   36778 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 22:49:45.173867   36778 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1209 22:49:45.173890   36778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 22:49:45.193577   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
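Since only one node was found, minikube picks kindnet and applies its manifest with the bundled kubectl (line above). Assuming the manifest keeps the usual kindnet DaemonSet name and app label (not shown in this log), the rollout could be checked with:

    # Inspect the kindnet DaemonSet and its pods (resource name and label assumed).
    kubectl -n kube-system get daemonset kindnet
    kubectl -n kube-system get pods -l app=kindnet -o wide
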
	I1209 22:49:45.540330   36778 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 22:49:45.540400   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:45.540429   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-920193 minikube.k8s.io/updated_at=2024_12_09T22_49_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=ha-920193 minikube.k8s.io/primary=true
	I1209 22:49:45.563713   36778 ops.go:34] apiserver oom_adj: -16
	I1209 22:49:45.755027   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:46.255384   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:46.755819   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:47.255436   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:47.755914   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:48.255404   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:48.755938   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:49.255745   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:49.346913   36778 kubeadm.go:1113] duration metric: took 3.806571287s to wait for elevateKubeSystemPrivileges
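The repeated "kubectl get sa default" calls above are a poll loop: minikube waits until the default ServiceAccount exists before it counts elevateKubeSystemPrivileges as done. A shell-level sketch of the same wait:

    # Poll until the default ServiceAccount appears (roughly what the loop above does).
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done
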
	I1209 22:49:49.346942   36778 kubeadm.go:394] duration metric: took 15.578011127s to StartCluster
	I1209 22:49:49.346958   36778 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:49.347032   36778 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:49:49.347686   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:49.347889   36778 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:49:49.347901   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 22:49:49.347912   36778 start.go:241] waiting for startup goroutines ...
	I1209 22:49:49.347916   36778 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 22:49:49.347997   36778 addons.go:69] Setting storage-provisioner=true in profile "ha-920193"
	I1209 22:49:49.348008   36778 addons.go:69] Setting default-storageclass=true in profile "ha-920193"
	I1209 22:49:49.348018   36778 addons.go:234] Setting addon storage-provisioner=true in "ha-920193"
	I1209 22:49:49.348025   36778 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-920193"
	I1209 22:49:49.348059   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:49:49.348092   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:49:49.348366   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.348401   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.348486   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.348504   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.364294   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41179
	I1209 22:49:49.364762   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I1209 22:49:49.364808   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.365192   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.365331   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.365359   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.365654   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.365671   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.365700   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.365855   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:49.366017   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.366436   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.366477   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.367841   36778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:49:49.368072   36778 kapi.go:59] client config for ha-920193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key", CAFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c9a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 22:49:49.368506   36778 cert_rotation.go:140] Starting client certificate rotation controller
	I1209 22:49:49.368728   36778 addons.go:234] Setting addon default-storageclass=true in "ha-920193"
	I1209 22:49:49.368759   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:49:49.368995   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.369045   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.381548   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44341
	I1209 22:49:49.382048   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.382623   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.382650   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.382946   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.383123   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:49.384085   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35753
	I1209 22:49:49.384563   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.385002   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:49.385074   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.385099   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.385406   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.385869   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.385898   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.387093   36778 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 22:49:49.388363   36778 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 22:49:49.388378   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 22:49:49.388396   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:49.391382   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.391959   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:49.391988   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.392168   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:49.392369   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:49.392529   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:49.392718   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:49.402583   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I1209 22:49:49.403101   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.403703   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.403733   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.404140   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.404327   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:49.406048   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:49.406246   36778 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 22:49:49.406264   36778 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 22:49:49.406283   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:49.409015   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.409417   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:49.409445   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.409566   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:49.409736   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:49.409906   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:49.410051   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:49.469421   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 22:49:49.523797   36778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 22:49:49.572493   36778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 22:49:49.935058   36778 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
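The sed pipeline above rewrites the Corefile stored in the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1) and query logging is enabled. The injected block can be inspected afterwards, roughly like so:

    # Show the hosts block the pipeline inserted (Corefile is the standard data key).
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
    # Expected fragment, per the sed expression above:
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }
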
	I1209 22:49:50.246776   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.246808   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.246866   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.246889   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.247109   36778 main.go:141] libmachine: (ha-920193) DBG | Closing plugin on server side
	I1209 22:49:50.247126   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247142   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247149   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247150   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247161   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.247168   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.247161   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.247214   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.247452   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247465   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247474   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247491   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247524   36778 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 22:49:50.247539   36778 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 22:49:50.247452   36778 main.go:141] libmachine: (ha-920193) DBG | Closing plugin on server side
	I1209 22:49:50.247679   36778 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1209 22:49:50.247688   36778 round_trippers.go:469] Request Headers:
	I1209 22:49:50.247699   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:49:50.247705   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:49:50.258818   36778 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1209 22:49:50.259388   36778 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1209 22:49:50.259405   36778 round_trippers.go:469] Request Headers:
	I1209 22:49:50.259415   36778 round_trippers.go:473]     Content-Type: application/json
	I1209 22:49:50.259421   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:49:50.259427   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:49:50.263578   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:49:50.263947   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.263973   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.264222   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.264298   36778 main.go:141] libmachine: (ha-920193) DBG | Closing plugin on server side
	I1209 22:49:50.264309   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.266759   36778 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1209 22:49:50.268058   36778 addons.go:510] duration metric: took 920.142906ms for enable addons: enabled=[storage-provisioner default-storageclass]
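Both addons are applied here as manifests copied over SSH and fed to the bundled kubectl; the roughly equivalent interactive commands for this profile would be:

    # Enable/verify the same addons through the minikube CLI (sketch).
    minikube -p ha-920193 addons enable storage-provisioner
    minikube -p ha-920193 addons enable default-storageclass
    minikube -p ha-920193 addons list | grep -E 'storage-provisioner|default-storageclass'
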
	I1209 22:49:50.268097   36778 start.go:246] waiting for cluster config update ...
	I1209 22:49:50.268112   36778 start.go:255] writing updated cluster config ...
	I1209 22:49:50.269702   36778 out.go:201] 
	I1209 22:49:50.271046   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:49:50.271126   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:50.272711   36778 out.go:177] * Starting "ha-920193-m02" control-plane node in "ha-920193" cluster
	I1209 22:49:50.273838   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:49:50.273861   36778 cache.go:56] Caching tarball of preloaded images
	I1209 22:49:50.273946   36778 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:49:50.273960   36778 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:49:50.274036   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:50.274220   36778 start.go:360] acquireMachinesLock for ha-920193-m02: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:49:50.274272   36778 start.go:364] duration metric: took 30.506µs to acquireMachinesLock for "ha-920193-m02"
	I1209 22:49:50.274296   36778 start.go:93] Provisioning new machine with config: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:49:50.274418   36778 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1209 22:49:50.275986   36778 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 22:49:50.276071   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:50.276101   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:50.290689   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42005
	I1209 22:49:50.291090   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:50.291624   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:50.291657   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:50.291974   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:50.292165   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:49:50.292290   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:49:50.292460   36778 start.go:159] libmachine.API.Create for "ha-920193" (driver="kvm2")
	I1209 22:49:50.292488   36778 client.go:168] LocalClient.Create starting
	I1209 22:49:50.292523   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:49:50.292562   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:50.292580   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:50.292650   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:49:50.292677   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:50.292694   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:50.292719   36778 main.go:141] libmachine: Running pre-create checks...
	I1209 22:49:50.292730   36778 main.go:141] libmachine: (ha-920193-m02) Calling .PreCreateCheck
	I1209 22:49:50.292863   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetConfigRaw
	I1209 22:49:50.293207   36778 main.go:141] libmachine: Creating machine...
	I1209 22:49:50.293220   36778 main.go:141] libmachine: (ha-920193-m02) Calling .Create
	I1209 22:49:50.293319   36778 main.go:141] libmachine: (ha-920193-m02) Creating KVM machine...
	I1209 22:49:50.294569   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found existing default KVM network
	I1209 22:49:50.294708   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found existing private KVM network mk-ha-920193
	I1209 22:49:50.294863   36778 main.go:141] libmachine: (ha-920193-m02) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02 ...
	I1209 22:49:50.294888   36778 main.go:141] libmachine: (ha-920193-m02) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:49:50.294937   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.294840   37166 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:50.295026   36778 main.go:141] libmachine: (ha-920193-m02) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:49:50.540657   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.540505   37166 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa...
	I1209 22:49:50.636978   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.636881   37166 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/ha-920193-m02.rawdisk...
	I1209 22:49:50.637002   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Writing magic tar header
	I1209 22:49:50.637012   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Writing SSH key tar header
	I1209 22:49:50.637092   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.637012   37166 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02 ...
	I1209 22:49:50.637134   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02
	I1209 22:49:50.637167   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02 (perms=drwx------)
	I1209 22:49:50.637189   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:49:50.637210   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:49:50.637225   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:49:50.637240   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:49:50.637251   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:49:50.637263   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:49:50.637274   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:50.637286   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:49:50.637298   36778 main.go:141] libmachine: (ha-920193-m02) Creating domain...
	I1209 22:49:50.637312   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:49:50.637321   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:49:50.637330   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home
	I1209 22:49:50.637341   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Skipping /home - not owner
	I1209 22:49:50.638225   36778 main.go:141] libmachine: (ha-920193-m02) define libvirt domain using xml: 
	I1209 22:49:50.638247   36778 main.go:141] libmachine: (ha-920193-m02) <domain type='kvm'>
	I1209 22:49:50.638255   36778 main.go:141] libmachine: (ha-920193-m02)   <name>ha-920193-m02</name>
	I1209 22:49:50.638263   36778 main.go:141] libmachine: (ha-920193-m02)   <memory unit='MiB'>2200</memory>
	I1209 22:49:50.638271   36778 main.go:141] libmachine: (ha-920193-m02)   <vcpu>2</vcpu>
	I1209 22:49:50.638284   36778 main.go:141] libmachine: (ha-920193-m02)   <features>
	I1209 22:49:50.638291   36778 main.go:141] libmachine: (ha-920193-m02)     <acpi/>
	I1209 22:49:50.638306   36778 main.go:141] libmachine: (ha-920193-m02)     <apic/>
	I1209 22:49:50.638319   36778 main.go:141] libmachine: (ha-920193-m02)     <pae/>
	I1209 22:49:50.638328   36778 main.go:141] libmachine: (ha-920193-m02)     
	I1209 22:49:50.638333   36778 main.go:141] libmachine: (ha-920193-m02)   </features>
	I1209 22:49:50.638340   36778 main.go:141] libmachine: (ha-920193-m02)   <cpu mode='host-passthrough'>
	I1209 22:49:50.638346   36778 main.go:141] libmachine: (ha-920193-m02)   
	I1209 22:49:50.638356   36778 main.go:141] libmachine: (ha-920193-m02)   </cpu>
	I1209 22:49:50.638364   36778 main.go:141] libmachine: (ha-920193-m02)   <os>
	I1209 22:49:50.638380   36778 main.go:141] libmachine: (ha-920193-m02)     <type>hvm</type>
	I1209 22:49:50.638393   36778 main.go:141] libmachine: (ha-920193-m02)     <boot dev='cdrom'/>
	I1209 22:49:50.638403   36778 main.go:141] libmachine: (ha-920193-m02)     <boot dev='hd'/>
	I1209 22:49:50.638426   36778 main.go:141] libmachine: (ha-920193-m02)     <bootmenu enable='no'/>
	I1209 22:49:50.638448   36778 main.go:141] libmachine: (ha-920193-m02)   </os>
	I1209 22:49:50.638464   36778 main.go:141] libmachine: (ha-920193-m02)   <devices>
	I1209 22:49:50.638475   36778 main.go:141] libmachine: (ha-920193-m02)     <disk type='file' device='cdrom'>
	I1209 22:49:50.638507   36778 main.go:141] libmachine: (ha-920193-m02)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/boot2docker.iso'/>
	I1209 22:49:50.638533   36778 main.go:141] libmachine: (ha-920193-m02)       <target dev='hdc' bus='scsi'/>
	I1209 22:49:50.638547   36778 main.go:141] libmachine: (ha-920193-m02)       <readonly/>
	I1209 22:49:50.638559   36778 main.go:141] libmachine: (ha-920193-m02)     </disk>
	I1209 22:49:50.638570   36778 main.go:141] libmachine: (ha-920193-m02)     <disk type='file' device='disk'>
	I1209 22:49:50.638583   36778 main.go:141] libmachine: (ha-920193-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:49:50.638601   36778 main.go:141] libmachine: (ha-920193-m02)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/ha-920193-m02.rawdisk'/>
	I1209 22:49:50.638612   36778 main.go:141] libmachine: (ha-920193-m02)       <target dev='hda' bus='virtio'/>
	I1209 22:49:50.638623   36778 main.go:141] libmachine: (ha-920193-m02)     </disk>
	I1209 22:49:50.638632   36778 main.go:141] libmachine: (ha-920193-m02)     <interface type='network'>
	I1209 22:49:50.638641   36778 main.go:141] libmachine: (ha-920193-m02)       <source network='mk-ha-920193'/>
	I1209 22:49:50.638652   36778 main.go:141] libmachine: (ha-920193-m02)       <model type='virtio'/>
	I1209 22:49:50.638661   36778 main.go:141] libmachine: (ha-920193-m02)     </interface>
	I1209 22:49:50.638672   36778 main.go:141] libmachine: (ha-920193-m02)     <interface type='network'>
	I1209 22:49:50.638680   36778 main.go:141] libmachine: (ha-920193-m02)       <source network='default'/>
	I1209 22:49:50.638690   36778 main.go:141] libmachine: (ha-920193-m02)       <model type='virtio'/>
	I1209 22:49:50.638708   36778 main.go:141] libmachine: (ha-920193-m02)     </interface>
	I1209 22:49:50.638726   36778 main.go:141] libmachine: (ha-920193-m02)     <serial type='pty'>
	I1209 22:49:50.638741   36778 main.go:141] libmachine: (ha-920193-m02)       <target port='0'/>
	I1209 22:49:50.638748   36778 main.go:141] libmachine: (ha-920193-m02)     </serial>
	I1209 22:49:50.638756   36778 main.go:141] libmachine: (ha-920193-m02)     <console type='pty'>
	I1209 22:49:50.638764   36778 main.go:141] libmachine: (ha-920193-m02)       <target type='serial' port='0'/>
	I1209 22:49:50.638775   36778 main.go:141] libmachine: (ha-920193-m02)     </console>
	I1209 22:49:50.638784   36778 main.go:141] libmachine: (ha-920193-m02)     <rng model='virtio'>
	I1209 22:49:50.638793   36778 main.go:141] libmachine: (ha-920193-m02)       <backend model='random'>/dev/random</backend>
	I1209 22:49:50.638807   36778 main.go:141] libmachine: (ha-920193-m02)     </rng>
	I1209 22:49:50.638821   36778 main.go:141] libmachine: (ha-920193-m02)     
	I1209 22:49:50.638836   36778 main.go:141] libmachine: (ha-920193-m02)     
	I1209 22:49:50.638854   36778 main.go:141] libmachine: (ha-920193-m02)   </devices>
	I1209 22:49:50.638870   36778 main.go:141] libmachine: (ha-920193-m02) </domain>
	I1209 22:49:50.638881   36778 main.go:141] libmachine: (ha-920193-m02) 
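The XML above is the libvirt domain definition the kvm2 driver builds for the second control-plane VM: 2 vCPUs, 2200 MiB of memory, a raw disk, and two virtio NICs on the default and mk-ha-920193 networks. Doing the same by hand would look roughly like this, assuming the XML were saved to ha-920193-m02.xml:

    # Define and start the domain manually (roughly what the driver does via libvirt).
    virsh --connect qemu:///system define ha-920193-m02.xml
    virsh --connect qemu:///system start ha-920193-m02
    virsh --connect qemu:///system domifaddr ha-920193-m02   # interface addresses, once DHCP assigns one
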
	I1209 22:49:50.645452   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:4e:0e:44 in network default
	I1209 22:49:50.646094   36778 main.go:141] libmachine: (ha-920193-m02) Ensuring networks are active...
	I1209 22:49:50.646118   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:50.646792   36778 main.go:141] libmachine: (ha-920193-m02) Ensuring network default is active
	I1209 22:49:50.647136   36778 main.go:141] libmachine: (ha-920193-m02) Ensuring network mk-ha-920193 is active
	I1209 22:49:50.647479   36778 main.go:141] libmachine: (ha-920193-m02) Getting domain xml...
	I1209 22:49:50.648166   36778 main.go:141] libmachine: (ha-920193-m02) Creating domain...
	I1209 22:49:51.846569   36778 main.go:141] libmachine: (ha-920193-m02) Waiting to get IP...
	I1209 22:49:51.847529   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:51.847984   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:51.848045   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:51.847987   37166 retry.go:31] will retry after 223.150886ms: waiting for machine to come up
	I1209 22:49:52.072674   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:52.073143   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:52.073214   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:52.073119   37166 retry.go:31] will retry after 342.157886ms: waiting for machine to come up
	I1209 22:49:52.416515   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:52.416911   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:52.416933   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:52.416873   37166 retry.go:31] will retry after 319.618715ms: waiting for machine to come up
	I1209 22:49:52.738511   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:52.739067   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:52.739096   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:52.739025   37166 retry.go:31] will retry after 426.813714ms: waiting for machine to come up
	I1209 22:49:53.167672   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:53.168111   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:53.168139   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:53.168063   37166 retry.go:31] will retry after 465.129361ms: waiting for machine to come up
	I1209 22:49:53.634495   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:53.635006   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:53.635033   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:53.634965   37166 retry.go:31] will retry after 688.228763ms: waiting for machine to come up
	I1209 22:49:54.324368   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:54.324751   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:54.324780   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:54.324706   37166 retry.go:31] will retry after 952.948713ms: waiting for machine to come up
	I1209 22:49:55.278732   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:55.279052   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:55.279084   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:55.279025   37166 retry.go:31] will retry after 1.032940312s: waiting for machine to come up
	I1209 22:49:56.313177   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:56.313589   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:56.313613   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:56.313562   37166 retry.go:31] will retry after 1.349167493s: waiting for machine to come up
	I1209 22:49:57.664618   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:57.665031   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:57.665060   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:57.664986   37166 retry.go:31] will retry after 1.512445541s: waiting for machine to come up
	I1209 22:49:59.179536   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:59.179914   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:59.179939   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:59.179864   37166 retry.go:31] will retry after 2.399970974s: waiting for machine to come up
	I1209 22:50:01.582227   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:01.582662   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:50:01.582690   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:50:01.582599   37166 retry.go:31] will retry after 2.728474301s: waiting for machine to come up
	I1209 22:50:04.312490   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:04.312880   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:50:04.312905   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:50:04.312847   37166 retry.go:31] will retry after 4.276505546s: waiting for machine to come up
	I1209 22:50:08.590485   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:08.590927   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:50:08.590949   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:50:08.590889   37166 retry.go:31] will retry after 4.29966265s: waiting for machine to come up
	I1209 22:50:12.892743   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.893228   36778 main.go:141] libmachine: (ha-920193-m02) Found IP for machine: 192.168.39.43
	I1209 22:50:12.893253   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has current primary IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.893261   36778 main.go:141] libmachine: (ha-920193-m02) Reserving static IP address...
	I1209 22:50:12.893598   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find host DHCP lease matching {name: "ha-920193-m02", mac: "52:54:00:e3:b9:61", ip: "192.168.39.43"} in network mk-ha-920193
	I1209 22:50:12.967208   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Getting to WaitForSSH function...
	I1209 22:50:12.967241   36778 main.go:141] libmachine: (ha-920193-m02) Reserved static IP address: 192.168.39.43
	I1209 22:50:12.967255   36778 main.go:141] libmachine: (ha-920193-m02) Waiting for SSH to be available...
	I1209 22:50:12.969615   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.969971   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:12.969998   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.970158   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Using SSH client type: external
	I1209 22:50:12.970180   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa (-rw-------)
	I1209 22:50:12.970211   36778 main.go:141] libmachine: (ha-920193-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:50:12.970224   36778 main.go:141] libmachine: (ha-920193-m02) DBG | About to run SSH command:
	I1209 22:50:12.970270   36778 main.go:141] libmachine: (ha-920193-m02) DBG | exit 0
	I1209 22:50:13.099696   36778 main.go:141] libmachine: (ha-920193-m02) DBG | SSH cmd err, output: <nil>: 
	I1209 22:50:13.100005   36778 main.go:141] libmachine: (ha-920193-m02) KVM machine creation complete!
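The retry loop above polls until the new domain gets an IP on mk-ha-920193 (via libvirt's DHCP leases), then probes SSH with a bare "exit 0". The same checks, done by hand with the values taken from this log, would be:

    # Watch for the lease on the cluster network, then confirm SSH answers.
    virsh --connect qemu:///system net-dhcp-leases mk-ha-920193
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa \
        docker@192.168.39.43 'exit 0' && echo "SSH reachable"
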
	I1209 22:50:13.100244   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetConfigRaw
	I1209 22:50:13.100810   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:13.100988   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:13.101128   36778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:50:13.101154   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetState
	I1209 22:50:13.102588   36778 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:50:13.102600   36778 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:50:13.102605   36778 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:50:13.102611   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.105041   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.105398   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.105421   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.105634   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.105791   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.105931   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.106034   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.106172   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.106381   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.106392   36778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:50:13.214686   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:50:13.214707   36778 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:50:13.214714   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.217518   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.217915   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.217939   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.218093   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.218249   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.218422   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.218594   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.218762   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.218925   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.218936   36778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:50:13.328344   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:50:13.328426   36778 main.go:141] libmachine: found compatible host: buildroot
	I1209 22:50:13.328436   36778 main.go:141] libmachine: Provisioning with buildroot...
	I1209 22:50:13.328445   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:50:13.328699   36778 buildroot.go:166] provisioning hostname "ha-920193-m02"
	I1209 22:50:13.328724   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:50:13.328916   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.331720   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.332124   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.332160   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.332317   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.332518   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.332696   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.332887   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.333073   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.333230   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.333241   36778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193-m02 && echo "ha-920193-m02" | sudo tee /etc/hostname
	I1209 22:50:13.453959   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193-m02
	
	I1209 22:50:13.453993   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.457007   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.457414   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.457445   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.457635   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.457816   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.457961   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.458096   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.458282   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.458465   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.458486   36778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:50:13.575704   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
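With the hostname set and /etc/hosts patched by the script above, a quick sanity check on the guest is just:

    hostname                            # should print ha-920193-m02
    grep -w ha-920193-m02 /etc/hosts    # should show the 127.0.1.1 mapping ensured above
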
	I1209 22:50:13.575734   36778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:50:13.575756   36778 buildroot.go:174] setting up certificates
	I1209 22:50:13.575768   36778 provision.go:84] configureAuth start
	I1209 22:50:13.575777   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:50:13.576037   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:13.578662   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.579132   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.579159   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.579337   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.581290   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.581592   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.581613   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.581740   36778 provision.go:143] copyHostCerts
	I1209 22:50:13.581770   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:50:13.581820   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:50:13.581832   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:50:13.581924   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:50:13.582006   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:50:13.582026   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:50:13.582033   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:50:13.582058   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:50:13.582102   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:50:13.582122   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:50:13.582131   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:50:13.582166   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:50:13.582231   36778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193-m02 san=[127.0.0.1 192.168.39.43 ha-920193-m02 localhost minikube]
	I1209 22:50:13.756786   36778 provision.go:177] copyRemoteCerts
	I1209 22:50:13.756844   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:50:13.756875   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.759281   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.759620   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.759646   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.759868   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.760043   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.760166   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.760302   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:13.842746   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:50:13.842829   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:50:13.868488   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:50:13.868558   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 22:50:13.894237   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:50:13.894300   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 22:50:13.919207   36778 provision.go:87] duration metric: took 343.427038ms to configureAuth
	I1209 22:50:13.919237   36778 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:50:13.919436   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:50:13.919529   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.922321   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.922667   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.922689   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.922943   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.923101   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.923227   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.923381   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.923527   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.923766   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.923783   36778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:50:14.145275   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:50:14.145304   36778 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:50:14.145313   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetURL
	I1209 22:50:14.146583   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Using libvirt version 6000000
	I1209 22:50:14.148809   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.149140   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.149168   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.149302   36778 main.go:141] libmachine: Docker is up and running!
	I1209 22:50:14.149316   36778 main.go:141] libmachine: Reticulating splines...
	I1209 22:50:14.149322   36778 client.go:171] duration metric: took 23.856827848s to LocalClient.Create
	I1209 22:50:14.149351   36778 start.go:167] duration metric: took 23.856891761s to libmachine.API.Create "ha-920193"
	I1209 22:50:14.149370   36778 start.go:293] postStartSetup for "ha-920193-m02" (driver="kvm2")
	I1209 22:50:14.149387   36778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:50:14.149412   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.149683   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:50:14.149706   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:14.152301   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.152593   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.152623   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.152758   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.152950   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.153102   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.153238   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:14.237586   36778 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:50:14.241320   36778 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:50:14.241353   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:50:14.241430   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:50:14.241512   36778 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:50:14.241522   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:50:14.241599   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:50:14.250940   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:50:14.273559   36778 start.go:296] duration metric: took 124.171367ms for postStartSetup
	I1209 22:50:14.273622   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetConfigRaw
	I1209 22:50:14.274207   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:14.276819   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.277127   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.277156   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.277340   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:50:14.277540   36778 start.go:128] duration metric: took 24.003111268s to createHost
	I1209 22:50:14.277563   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:14.279937   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.280232   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.280257   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.280382   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.280557   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.280726   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.280910   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.281099   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:14.281291   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:14.281304   36778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:50:14.388152   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733784614.364424625
	
	I1209 22:50:14.388174   36778 fix.go:216] guest clock: 1733784614.364424625
	I1209 22:50:14.388181   36778 fix.go:229] Guest: 2024-12-09 22:50:14.364424625 +0000 UTC Remote: 2024-12-09 22:50:14.27755238 +0000 UTC m=+71.170238927 (delta=86.872245ms)
	I1209 22:50:14.388195   36778 fix.go:200] guest clock delta is within tolerance: 86.872245ms
	I1209 22:50:14.388200   36778 start.go:83] releasing machines lock for "ha-920193-m02", held for 24.113917393s
	I1209 22:50:14.388222   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.388471   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:14.391084   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.391432   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.391458   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.393935   36778 out.go:177] * Found network options:
	I1209 22:50:14.395356   36778 out.go:177]   - NO_PROXY=192.168.39.102
	W1209 22:50:14.396713   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:50:14.396769   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.397373   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.397558   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.397653   36778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:50:14.397697   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	W1209 22:50:14.397767   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:50:14.397855   36778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:50:14.397879   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:14.400330   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.400563   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.400725   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.400755   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.400909   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.400944   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.400970   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.401106   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.401188   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.401275   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.401373   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.401443   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:14.401504   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.401614   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:14.637188   36778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:50:14.643200   36778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:50:14.643281   36778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:50:14.659398   36778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 22:50:14.659426   36778 start.go:495] detecting cgroup driver to use...
	I1209 22:50:14.659491   36778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:50:14.676247   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:50:14.690114   36778 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:50:14.690183   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:50:14.704181   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:50:14.718407   36778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:50:14.836265   36778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:50:14.977440   36778 docker.go:233] disabling docker service ...
	I1209 22:50:14.977523   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:50:14.992218   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:50:15.006032   36778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:50:15.132938   36778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:50:15.246587   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:50:15.260594   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:50:15.278081   36778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:50:15.278144   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.288215   36778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:50:15.288291   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.298722   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.309333   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.319278   36778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:50:15.329514   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.339686   36778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.356544   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.367167   36778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:50:15.376313   36778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:50:15.376368   36778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:50:15.389607   36778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 22:50:15.399026   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:50:15.510388   36778 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 22:50:15.594142   36778 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:50:15.594209   36778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:50:15.598620   36778 start.go:563] Will wait 60s for crictl version
	I1209 22:50:15.598673   36778 ssh_runner.go:195] Run: which crictl
	I1209 22:50:15.602047   36778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:50:15.640250   36778 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:50:15.640331   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:50:15.667027   36778 ssh_runner.go:195] Run: crio --version
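Each of the `Run:` lines above is a one-shot shell command executed on the guest over SSH. Below is a minimal Go sketch of that pattern using golang.org/x/crypto/ssh; the address, user and key path mirror the values shown in this log, while the single sed command is just one of the CRI-O edits above picked as an illustration. This is not minikube's actual ssh_runner implementation, only a sketch of the idea.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Per-machine private key, as used by the ssh client in the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.39.43:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// One command per session, e.g. the pause-image edit applied earlier in the log.
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`)
	if err != nil {
		log.Fatalf("remote command failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}

Keeping each command in its own session is what makes the per-command exit status and combined output available for the log lines above.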
	I1209 22:50:15.696782   36778 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:50:15.698100   36778 out.go:177]   - env NO_PROXY=192.168.39.102
	I1209 22:50:15.699295   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:15.701971   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:15.702367   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:15.702391   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:15.702593   36778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:50:15.706559   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:50:15.719413   36778 mustload.go:65] Loading cluster: ha-920193
	I1209 22:50:15.719679   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:50:15.720045   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:50:15.720080   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:50:15.735359   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I1209 22:50:15.735806   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:50:15.736258   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:50:15.736277   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:50:15.736597   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:50:15.736809   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:50:15.738383   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:50:15.738784   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:50:15.738819   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:50:15.754087   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I1209 22:50:15.754545   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:50:15.755016   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:50:15.755039   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:50:15.755363   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:50:15.755658   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:50:15.755811   36778 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.43
	I1209 22:50:15.755825   36778 certs.go:194] generating shared ca certs ...
	I1209 22:50:15.755842   36778 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:50:15.756003   36778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:50:15.756062   36778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:50:15.756077   36778 certs.go:256] generating profile certs ...
	I1209 22:50:15.756191   36778 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:50:15.756224   36778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a
	I1209 22:50:15.756244   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.43 192.168.39.254]
	I1209 22:50:15.922567   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a ...
	I1209 22:50:15.922607   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a: {Name:mkdd9b3ceabde3bba17fb86e02452182c7c5df88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:50:15.922833   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a ...
	I1209 22:50:15.922852   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a: {Name:mkf2dc6e973669b6272c7472a098255f36b1b21a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:50:15.922964   36778 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:50:15.923108   36778 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
	I1209 22:50:15.923250   36778 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:50:15.923268   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:50:15.923283   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:50:15.923300   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:50:15.923315   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:50:15.923331   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:50:15.923346   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:50:15.923361   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:50:15.923376   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:50:15.923447   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:50:15.923481   36778 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:50:15.923492   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:50:15.923526   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:50:15.923552   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:50:15.923617   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:50:15.923669   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:50:15.923701   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:50:15.923718   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:15.923736   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:50:15.923774   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:50:15.926684   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:15.927100   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:50:15.927132   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:15.927316   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:50:15.927520   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:50:15.927686   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:50:15.927817   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:50:15.995984   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 22:50:16.000689   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 22:50:16.010769   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 22:50:16.015461   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 22:50:16.025382   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 22:50:16.029170   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 22:50:16.038869   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 22:50:16.042928   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 22:50:16.052680   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 22:50:16.056624   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 22:50:16.067154   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 22:50:16.071136   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1209 22:50:16.081380   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:50:16.105907   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:50:16.130202   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:50:16.154712   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:50:16.178136   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 22:50:16.201144   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 22:50:16.223968   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:50:16.245967   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:50:16.268545   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:50:16.290945   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:50:16.313125   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:50:16.335026   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 22:50:16.350896   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 22:50:16.366797   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 22:50:16.382304   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 22:50:16.398151   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 22:50:16.413542   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1209 22:50:16.428943   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 22:50:16.443894   36778 ssh_runner.go:195] Run: openssl version
	I1209 22:50:16.449370   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:50:16.460122   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:50:16.464413   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:50:16.464474   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:50:16.470266   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 22:50:16.480854   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:50:16.491307   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:50:16.495420   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:50:16.495468   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:50:16.500658   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:50:16.511025   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:50:16.521204   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:16.525268   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:16.525347   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:16.530531   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 22:50:16.542187   36778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:50:16.546109   36778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:50:16.546164   36778 kubeadm.go:934] updating node {m02 192.168.39.43 8443 v1.31.2 crio true true} ...
	I1209 22:50:16.546250   36778 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 22:50:16.546279   36778 kube-vip.go:115] generating kube-vip config ...
	I1209 22:50:16.546321   36778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:50:16.565259   36778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:50:16.565317   36778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
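The kube-vip config generated above is an ordinary static-pod manifest. As a rough sketch of how such a manifest can be read back and inspected in Go, assuming sigs.k8s.io/yaml and the k8s.io/api/core/v1 types are available (the file path matches the scp destination used later in this log; the rest is illustrative, not part of minikube):

package main

import (
	"fmt"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Static-pod manifest written by the provisioner.
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		log.Fatal(err)
	}

	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		log.Fatal(err)
	}

	// Print the VIP and load-balancer settings that kube-vip reads from env vars.
	for _, c := range pod.Spec.Containers {
		for _, e := range c.Env {
			switch e.Name {
			case "address", "lb_enable", "lb_port":
				fmt.Printf("%s=%s\n", e.Name, e.Value)
			}
		}
	}
}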
	I1209 22:50:16.565368   36778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:50:16.576227   36778 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 22:50:16.576286   36778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 22:50:16.587283   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 22:50:16.587313   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:50:16.587347   36778 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1209 22:50:16.587371   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:50:16.587429   36778 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1209 22:50:16.591406   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 22:50:16.591443   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 22:50:17.403840   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:50:17.403917   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:50:17.408515   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 22:50:17.408550   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 22:50:17.508668   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:50:17.539619   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:50:17.539709   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:50:17.547698   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 22:50:17.547746   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1209 22:50:17.976645   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 22:50:17.986050   36778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 22:50:18.001981   36778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:50:18.017737   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 22:50:18.034382   36778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:50:18.038243   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:50:18.051238   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:50:18.168167   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:50:18.185010   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:50:18.185466   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:50:18.185511   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:50:18.200608   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36771
	I1209 22:50:18.201083   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:50:18.201577   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:50:18.201599   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:50:18.201983   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:50:18.202177   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:50:18.202335   36778 start.go:317] joinCluster: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:50:18.202454   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 22:50:18.202478   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:50:18.205838   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:18.206272   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:50:18.206305   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:18.206454   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:50:18.206651   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:50:18.206809   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:50:18.206953   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:50:18.346102   36778 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:50:18.346151   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0e0mum.qzhjvrjwvxlgpdn7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443"
	I1209 22:50:38.220755   36778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0e0mum.qzhjvrjwvxlgpdn7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443": (19.874577958s)
	I1209 22:50:38.220795   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 22:50:38.605694   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-920193-m02 minikube.k8s.io/updated_at=2024_12_09T22_50_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=ha-920193 minikube.k8s.io/primary=false
	I1209 22:50:38.732046   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-920193-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 22:50:38.853470   36778 start.go:319] duration metric: took 20.651129665s to joinCluster
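
The join step above finishes by labeling the new control-plane node and removing its NoSchedule taint via kubectl over SSH. As a rough illustration only (the kubeconfig path, label key and value below are assumptions, not minikube's code), the same label overwrite could be done with client-go:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig; the path here is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Strategic-merge patch that overwrites one label on the node,
	// comparable to `kubectl label --overwrite nodes ha-920193-m02 ...`.
	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"false"}}}`)
	node, err := cs.CoreV1().Nodes().Patch(context.TODO(), "ha-920193-m02",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labeled node:", node.Name)
}
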
	I1209 22:50:38.853557   36778 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:50:38.853987   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:50:38.855541   36778 out.go:177] * Verifying Kubernetes components...
	I1209 22:50:38.856758   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:50:39.134622   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:50:39.155772   36778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:50:39.156095   36778 kapi.go:59] client config for ha-920193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key", CAFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c9a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 22:50:39.156174   36778 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1209 22:50:39.156458   36778 node_ready.go:35] waiting up to 6m0s for node "ha-920193-m02" to be "Ready" ...
	I1209 22:50:39.156557   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:39.156569   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:39.156580   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:39.156589   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:39.166040   36778 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1209 22:50:39.656808   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:39.656835   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:39.656848   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:39.656853   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:39.660666   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:40.157282   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:40.157306   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:40.157314   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:40.157319   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:40.171594   36778 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1209 22:50:40.656953   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:40.656975   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:40.656984   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:40.656988   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:40.660321   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:41.157246   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:41.157267   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:41.157275   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:41.157278   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:41.160595   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:41.161242   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:41.657713   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:41.657743   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:41.657754   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:41.657760   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:41.661036   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:42.157055   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:42.157081   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:42.157092   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:42.157098   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:42.160406   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:42.657502   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:42.657525   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:42.657535   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:42.657543   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:42.660437   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:43.157580   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:43.157601   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:43.157610   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:43.157614   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:43.159874   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:43.657603   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:43.657624   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:43.657631   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:43.657635   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:43.661418   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:43.662212   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:44.157154   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:44.157180   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:44.157193   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:44.157199   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:44.160641   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:44.657594   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:44.657632   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:44.657639   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:44.657643   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:44.660444   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:45.156643   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:45.156665   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:45.156673   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:45.156678   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:45.159591   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:45.656824   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:45.656848   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:45.656860   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:45.656865   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:45.660567   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:46.157410   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:46.157431   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:46.157440   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:46.157444   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:46.164952   36778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1209 22:50:46.165425   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:46.656667   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:46.656688   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:46.656695   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:46.656701   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:46.660336   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:47.157296   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:47.157321   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:47.157329   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:47.157332   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:47.160332   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:47.657301   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:47.657323   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:47.657331   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:47.657336   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:47.660325   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:48.157563   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:48.157584   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:48.157594   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:48.157608   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:48.160951   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:48.657246   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:48.657273   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:48.657284   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:48.657292   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:48.660393   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:48.661028   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:49.157387   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:49.157407   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:49.157413   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:49.157418   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:49.160553   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:49.656857   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:49.656876   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:49.656884   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:49.656887   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:49.660150   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:50.157105   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:50.157127   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:50.157135   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:50.157138   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:50.160132   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:50.657157   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:50.657175   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:50.657183   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:50.657186   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:50.660060   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:51.156681   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:51.156703   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:51.156710   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:51.156715   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:51.160061   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:51.160485   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:51.656792   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:51.656814   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:51.656822   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:51.656828   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:51.660462   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:52.157422   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:52.157444   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:52.157452   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:52.157456   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:52.160620   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:52.657587   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:52.657612   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:52.657623   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:52.657635   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:52.661805   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:50:53.156794   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:53.156813   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:53.156820   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:53.156824   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:53.159611   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:53.657422   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:53.657443   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:53.657451   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:53.657456   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:53.660973   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:53.661490   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:54.156741   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:54.156775   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:54.156788   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:54.156793   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:54.159842   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:54.657520   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:54.657542   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:54.657551   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:54.657556   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:54.661360   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:55.157356   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:55.157381   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:55.157389   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:55.157398   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:55.160974   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:55.657357   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:55.657380   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:55.657386   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:55.657389   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:55.661109   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:55.661633   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:56.156805   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:56.156829   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:56.156842   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:56.156848   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:56.159652   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:56.657355   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:56.657382   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:56.657391   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:56.657396   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:56.660284   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.156798   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.156817   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.156825   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.156828   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.159439   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.160184   36778 node_ready.go:49] node "ha-920193-m02" has status "Ready":"True"
	I1209 22:50:57.160211   36778 node_ready.go:38] duration metric: took 18.003728094s for node "ha-920193-m02" to be "Ready" ...
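
The readiness loop logged above issues GET /api/v1/nodes/ha-920193-m02 roughly every 500ms until the node reports Ready (about 18s here). A minimal client-go sketch of that polling pattern; the helper name and interval are assumptions, not minikube's implementation:

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls the node object until its NodeReady condition is True
// or the timeout expires.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
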
	I1209 22:50:57.160219   36778 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:50:57.160281   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:50:57.160291   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.160297   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.160301   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.163826   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.171109   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.171198   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9792g
	I1209 22:50:57.171207   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.171215   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.171218   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.175686   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:50:57.176418   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.176433   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.176440   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.176445   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.178918   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.179482   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.179502   36778 pod_ready.go:82] duration metric: took 8.366716ms for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.179511   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.179579   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pftgv
	I1209 22:50:57.179590   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.179601   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.179607   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.181884   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.182566   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.182584   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.182593   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.182603   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.184849   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.185336   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.185356   36778 pod_ready.go:82] duration metric: took 5.835616ms for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.185369   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.185431   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193
	I1209 22:50:57.185440   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.185446   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.185452   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.187419   36778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1209 22:50:57.188120   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.188138   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.188148   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.188155   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.190287   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.190719   36778 pod_ready.go:93] pod "etcd-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.190736   36778 pod_ready.go:82] duration metric: took 5.359942ms for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.190748   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.190809   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m02
	I1209 22:50:57.190819   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.190828   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.190835   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.192882   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.193624   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.193638   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.193645   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.193648   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.195725   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.196308   36778 pod_ready.go:93] pod "etcd-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.196330   36778 pod_ready.go:82] duration metric: took 5.570375ms for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
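
Each pod_ready check above is a GET of the pod followed by a GET of the node it runs on; the pod half boils down to inspecting the PodReady condition. A hedged client-go sketch of that check (the function name is illustrative):

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady fetches a pod and reports whether its PodReady condition is True.
func podIsReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, namespace+"/"+name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
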
	I1209 22:50:57.196346   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.357701   36778 request.go:632] Waited for 161.300261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:50:57.357803   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:50:57.357815   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.357826   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.357831   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.361143   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.557163   36778 request.go:632] Waited for 195.392304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.557255   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.557275   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.557286   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.557299   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.560687   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.561270   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.561292   36778 pod_ready.go:82] duration metric: took 364.939583ms for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.561303   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.757400   36778 request.go:632] Waited for 196.034135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:50:57.757501   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:50:57.757514   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.757525   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.757533   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.761021   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.957152   36778 request.go:632] Waited for 195.395123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.957252   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.957262   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.957269   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.957273   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.961000   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.961523   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.961541   36778 pod_ready.go:82] duration metric: took 400.228352ms for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.961551   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.157823   36778 request.go:632] Waited for 196.207607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:50:58.157936   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:50:58.157948   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.157956   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.157960   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.161121   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:58.357017   36778 request.go:632] Waited for 194.771557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:58.357073   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:58.357091   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.357099   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.357103   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.360088   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:58.360518   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:58.360541   36778 pod_ready.go:82] duration metric: took 398.983882ms for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.360551   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.557689   36778 request.go:632] Waited for 197.047701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:50:58.557763   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:50:58.557772   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.557779   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.557783   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.561314   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:58.757454   36778 request.go:632] Waited for 195.361025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:58.757514   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:58.757519   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.757531   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.757540   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.760353   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:58.760931   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:58.760952   36778 pod_ready.go:82] duration metric: took 400.394843ms for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.760961   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.956933   36778 request.go:632] Waited for 195.877051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:50:58.956989   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:50:58.956993   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.957001   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.957005   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.960313   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.157481   36778 request.go:632] Waited for 196.370711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:59.157545   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:59.157551   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.157558   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.157562   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.160790   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.161308   36778 pod_ready.go:93] pod "kube-proxy-lntbt" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:59.161325   36778 pod_ready.go:82] duration metric: took 400.358082ms for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.161334   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.357539   36778 request.go:632] Waited for 196.144123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:50:59.357600   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:50:59.357605   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.357614   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.357619   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.360709   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.557525   36778 request.go:632] Waited for 196.134266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.557582   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.557587   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.557594   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.557599   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.561037   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.561650   36778 pod_ready.go:93] pod "kube-proxy-r8nhm" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:59.561671   36778 pod_ready.go:82] duration metric: took 400.330133ms for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.561686   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.757716   36778 request.go:632] Waited for 195.957167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:50:59.757794   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:50:59.757799   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.757806   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.757810   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.760629   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:59.957516   36778 request.go:632] Waited for 196.356707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.957571   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.957576   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.957583   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.957589   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.960569   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:59.961033   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:59.961052   36778 pod_ready.go:82] duration metric: took 399.355328ms for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.961065   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:51:00.157215   36778 request.go:632] Waited for 196.068129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:51:00.157354   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:51:00.157371   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.157385   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.157393   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.160825   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:00.357607   36778 request.go:632] Waited for 196.256861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:51:00.357660   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:51:00.357665   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.357673   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.357676   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.360928   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:00.361370   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:51:00.361388   36778 pod_ready.go:82] duration metric: took 400.315143ms for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:51:00.361398   36778 pod_ready.go:39] duration metric: took 3.201168669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
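
The repeated "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side token-bucket rate limiter: the rest.Config dumped earlier shows QPS:0, Burst:0, which means the defaults of 5 requests/s and a burst of 10 apply, so bursts of back-to-back GETs queue for ~200ms each. If that throttling were undesirable, the limits can be raised on the rest.Config before building the clientset; the numbers below are illustrative:

package example

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a larger client-side rate-limit budget
// than client-go's defaults (QPS 5, Burst 10).
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // sustained requests per second allowed by the limiter
	cfg.Burst = 100 // short bursts above QPS before requests start to queue
	return kubernetes.NewForConfig(cfg)
}
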
	I1209 22:51:00.361416   36778 api_server.go:52] waiting for apiserver process to appear ...
	I1209 22:51:00.361461   36778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 22:51:00.375321   36778 api_server.go:72] duration metric: took 21.521720453s to wait for apiserver process to appear ...
	I1209 22:51:00.375346   36778 api_server.go:88] waiting for apiserver healthz status ...
	I1209 22:51:00.375364   36778 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1209 22:51:00.379577   36778 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1209 22:51:00.379640   36778 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1209 22:51:00.379648   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.379656   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.379662   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.380589   36778 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1209 22:51:00.380716   36778 api_server.go:141] control plane version: v1.31.2
	I1209 22:51:00.380756   36778 api_server.go:131] duration metric: took 5.402425ms to wait for apiserver health ...
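
The health check above is a plain GET of /healthz that must return 200 with the body "ok", followed by a GET of /version to read the control-plane version. A sketch of the healthz probe; a real client would present the cluster CA and client certificate from the kubeconfig rather than a bare http.Client:

package example

import (
	"io"
	"net/http"
)

// apiServerHealthy reports whether GET <host>/healthz returns 200 with body "ok".
// The caller supplies an *http.Client already configured with the cluster's TLS material.
func apiServerHealthy(client *http.Client, host string) bool {
	resp, err := client.Get(host + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return err == nil && resp.StatusCode == http.StatusOK && string(body) == "ok"
}
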
	I1209 22:51:00.380766   36778 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 22:51:00.557205   36778 request.go:632] Waited for 176.35448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.557271   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.557277   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.557284   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.557289   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.561926   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:00.568583   36778 system_pods.go:59] 17 kube-system pods found
	I1209 22:51:00.568619   36778 system_pods.go:61] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:51:00.568631   36778 system_pods.go:61] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:51:00.568637   36778 system_pods.go:61] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:51:00.568643   36778 system_pods.go:61] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:51:00.568648   36778 system_pods.go:61] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:51:00.568652   36778 system_pods.go:61] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:51:00.568657   36778 system_pods.go:61] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:51:00.568662   36778 system_pods.go:61] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:51:00.568672   36778 system_pods.go:61] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:51:00.568677   36778 system_pods.go:61] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:51:00.568681   36778 system_pods.go:61] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:51:00.568687   36778 system_pods.go:61] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:51:00.568692   36778 system_pods.go:61] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:51:00.568699   36778 system_pods.go:61] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:51:00.568703   36778 system_pods.go:61] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:51:00.568709   36778 system_pods.go:61] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:51:00.568713   36778 system_pods.go:61] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:51:00.568720   36778 system_pods.go:74] duration metric: took 187.947853ms to wait for pod list to return data ...
	I1209 22:51:00.568736   36778 default_sa.go:34] waiting for default service account to be created ...
	I1209 22:51:00.757459   36778 request.go:632] Waited for 188.649373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:51:00.757529   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:51:00.757535   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.757542   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.757549   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.761133   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:00.761462   36778 default_sa.go:45] found service account: "default"
	I1209 22:51:00.761484   36778 default_sa.go:55] duration metric: took 192.741843ms for default service account to be created ...
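
The default_sa step only confirms that the "default" ServiceAccount exists in the default namespace, which the control plane creates asynchronously after the namespace appears. Roughly, in client-go (a sketch, not minikube's code):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// defaultServiceAccountExists checks for the "default" ServiceAccount in the default namespace.
func defaultServiceAccountExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, sa := range sas.Items {
		if sa.Name == "default" {
			return true, nil
		}
	}
	return false, nil
}
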
	I1209 22:51:00.761493   36778 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 22:51:00.957815   36778 request.go:632] Waited for 196.251364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.957869   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.957874   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.957881   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.957886   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.962434   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:00.967784   36778 system_pods.go:86] 17 kube-system pods found
	I1209 22:51:00.967807   36778 system_pods.go:89] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:51:00.967813   36778 system_pods.go:89] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:51:00.967818   36778 system_pods.go:89] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:51:00.967822   36778 system_pods.go:89] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:51:00.967825   36778 system_pods.go:89] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:51:00.967829   36778 system_pods.go:89] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:51:00.967832   36778 system_pods.go:89] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:51:00.967836   36778 system_pods.go:89] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:51:00.967839   36778 system_pods.go:89] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:51:00.967843   36778 system_pods.go:89] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:51:00.967846   36778 system_pods.go:89] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:51:00.967849   36778 system_pods.go:89] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:51:00.967853   36778 system_pods.go:89] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:51:00.967856   36778 system_pods.go:89] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:51:00.967859   36778 system_pods.go:89] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:51:00.967862   36778 system_pods.go:89] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:51:00.967865   36778 system_pods.go:89] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:51:00.967872   36778 system_pods.go:126] duration metric: took 206.369849ms to wait for k8s-apps to be running ...
	I1209 22:51:00.967881   36778 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 22:51:00.967920   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:51:00.982635   36778 system_svc.go:56] duration metric: took 14.746001ms WaitForService to wait for kubelet
	I1209 22:51:00.982658   36778 kubeadm.go:582] duration metric: took 22.129061399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
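
The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` over SSH and treats a zero exit code as "running". The same contract, shown locally with os/exec purely as an illustration (the test itself goes through its SSH runner):

package example

import "os/exec"

// kubeletActive returns true when systemd reports the kubelet unit as active;
// `systemctl is-active --quiet` exits 0 only in that case.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}
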
	I1209 22:51:00.982676   36778 node_conditions.go:102] verifying NodePressure condition ...
	I1209 22:51:01.157065   36778 request.go:632] Waited for 174.324712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1209 22:51:01.157132   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1209 22:51:01.157137   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:01.157146   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:01.157150   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:01.161631   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:01.162406   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:51:01.162427   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:51:01.162443   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:51:01.162449   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:51:01.162454   36778 node_conditions.go:105] duration metric: took 179.774021ms to run NodePressure ...
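
The NodePressure step lists all nodes and reads each node's capacity, which is where the "storage ephemeral capacity is 17734596Ki" and "cpu capacity is 2" lines come from. A client-go sketch of reading those fields (function name is an assumption):

package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists the cluster's nodes and prints CPU and ephemeral-storage capacity.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}
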
	I1209 22:51:01.162470   36778 start.go:241] waiting for startup goroutines ...
	I1209 22:51:01.162500   36778 start.go:255] writing updated cluster config ...
	I1209 22:51:01.164529   36778 out.go:201] 
	I1209 22:51:01.165967   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:01.166048   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:51:01.167621   36778 out.go:177] * Starting "ha-920193-m03" control-plane node in "ha-920193" cluster
	I1209 22:51:01.168868   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:51:01.168885   36778 cache.go:56] Caching tarball of preloaded images
	I1209 22:51:01.168992   36778 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:51:01.169010   36778 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:51:01.169110   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:51:01.169269   36778 start.go:360] acquireMachinesLock for ha-920193-m03: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:51:01.169312   36778 start.go:364] duration metric: took 23.987µs to acquireMachinesLock for "ha-920193-m03"
	I1209 22:51:01.169336   36778 start.go:93] Provisioning new machine with config: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
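
The node list dumped above is also persisted to the profile's config.json after each step ("Saving config to .../profiles/ha-920193/config.json"). As a rough illustration only, with field names trimmed to what the dump shows rather than minikube's full schema, reading that file back and listing the nodes could look like:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Node and Profile are hypothetical, trimmed-down shapes; the real
	// config.json carries many more fields (see the struct dump above).
	type Node struct {
		Name              string
		IP                string
		Port              int
		KubernetesVersion string
		ControlPlane      bool
		Worker            bool
	}

	type Profile struct {
		Name  string
		Nodes []Node
	}

	func main() {
		// Pass the config.json path shown in the log as the first argument.
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			panic(err)
		}
		var p Profile
		if err := json.Unmarshal(data, &p); err != nil {
			panic(err)
		}
		for _, n := range p.Nodes {
			fmt.Printf("%s %s control-plane=%v worker=%v\n", n.Name, n.IP, n.ControlPlane, n.Worker)
		}
	}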
	I1209 22:51:01.169433   36778 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1209 22:51:01.171416   36778 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 22:51:01.171522   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:01.171583   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:01.186366   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42027
	I1209 22:51:01.186874   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:01.187404   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:01.187428   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:01.187781   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:01.187979   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:01.188140   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:01.188306   36778 start.go:159] libmachine.API.Create for "ha-920193" (driver="kvm2")
	I1209 22:51:01.188339   36778 client.go:168] LocalClient.Create starting
	I1209 22:51:01.188376   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:51:01.188415   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:51:01.188430   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:51:01.188479   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:51:01.188497   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:51:01.188505   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:51:01.188519   36778 main.go:141] libmachine: Running pre-create checks...
	I1209 22:51:01.188524   36778 main.go:141] libmachine: (ha-920193-m03) Calling .PreCreateCheck
	I1209 22:51:01.188706   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetConfigRaw
	I1209 22:51:01.189120   36778 main.go:141] libmachine: Creating machine...
	I1209 22:51:01.189133   36778 main.go:141] libmachine: (ha-920193-m03) Calling .Create
	I1209 22:51:01.189263   36778 main.go:141] libmachine: (ha-920193-m03) Creating KVM machine...
	I1209 22:51:01.190619   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found existing default KVM network
	I1209 22:51:01.190780   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found existing private KVM network mk-ha-920193
	I1209 22:51:01.190893   36778 main.go:141] libmachine: (ha-920193-m03) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03 ...
	I1209 22:51:01.190907   36778 main.go:141] libmachine: (ha-920193-m03) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:51:01.191000   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.190898   37541 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:51:01.191087   36778 main.go:141] libmachine: (ha-920193-m03) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:51:01.428399   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.428270   37541 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa...
	I1209 22:51:01.739906   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.739799   37541 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/ha-920193-m03.rawdisk...
	I1209 22:51:01.739933   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Writing magic tar header
	I1209 22:51:01.739943   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Writing SSH key tar header
	I1209 22:51:01.739951   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.739915   37541 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03 ...
	I1209 22:51:01.740035   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03
	I1209 22:51:01.740064   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03 (perms=drwx------)
	I1209 22:51:01.740080   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:51:01.740097   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:51:01.740107   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:51:01.740114   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:51:01.740127   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:51:01.740140   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:51:01.740154   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:51:01.740167   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:51:01.740178   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:51:01.740189   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home
	I1209 22:51:01.740219   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:51:01.740244   36778 main.go:141] libmachine: (ha-920193-m03) Creating domain...
	I1209 22:51:01.740252   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Skipping /home - not owner
	I1209 22:51:01.741067   36778 main.go:141] libmachine: (ha-920193-m03) define libvirt domain using xml: 
	I1209 22:51:01.741086   36778 main.go:141] libmachine: (ha-920193-m03) <domain type='kvm'>
	I1209 22:51:01.741093   36778 main.go:141] libmachine: (ha-920193-m03)   <name>ha-920193-m03</name>
	I1209 22:51:01.741098   36778 main.go:141] libmachine: (ha-920193-m03)   <memory unit='MiB'>2200</memory>
	I1209 22:51:01.741103   36778 main.go:141] libmachine: (ha-920193-m03)   <vcpu>2</vcpu>
	I1209 22:51:01.741110   36778 main.go:141] libmachine: (ha-920193-m03)   <features>
	I1209 22:51:01.741115   36778 main.go:141] libmachine: (ha-920193-m03)     <acpi/>
	I1209 22:51:01.741119   36778 main.go:141] libmachine: (ha-920193-m03)     <apic/>
	I1209 22:51:01.741124   36778 main.go:141] libmachine: (ha-920193-m03)     <pae/>
	I1209 22:51:01.741128   36778 main.go:141] libmachine: (ha-920193-m03)     
	I1209 22:51:01.741133   36778 main.go:141] libmachine: (ha-920193-m03)   </features>
	I1209 22:51:01.741147   36778 main.go:141] libmachine: (ha-920193-m03)   <cpu mode='host-passthrough'>
	I1209 22:51:01.741152   36778 main.go:141] libmachine: (ha-920193-m03)   
	I1209 22:51:01.741157   36778 main.go:141] libmachine: (ha-920193-m03)   </cpu>
	I1209 22:51:01.741162   36778 main.go:141] libmachine: (ha-920193-m03)   <os>
	I1209 22:51:01.741166   36778 main.go:141] libmachine: (ha-920193-m03)     <type>hvm</type>
	I1209 22:51:01.741171   36778 main.go:141] libmachine: (ha-920193-m03)     <boot dev='cdrom'/>
	I1209 22:51:01.741176   36778 main.go:141] libmachine: (ha-920193-m03)     <boot dev='hd'/>
	I1209 22:51:01.741184   36778 main.go:141] libmachine: (ha-920193-m03)     <bootmenu enable='no'/>
	I1209 22:51:01.741188   36778 main.go:141] libmachine: (ha-920193-m03)   </os>
	I1209 22:51:01.741225   36778 main.go:141] libmachine: (ha-920193-m03)   <devices>
	I1209 22:51:01.741245   36778 main.go:141] libmachine: (ha-920193-m03)     <disk type='file' device='cdrom'>
	I1209 22:51:01.741288   36778 main.go:141] libmachine: (ha-920193-m03)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/boot2docker.iso'/>
	I1209 22:51:01.741325   36778 main.go:141] libmachine: (ha-920193-m03)       <target dev='hdc' bus='scsi'/>
	I1209 22:51:01.741339   36778 main.go:141] libmachine: (ha-920193-m03)       <readonly/>
	I1209 22:51:01.741350   36778 main.go:141] libmachine: (ha-920193-m03)     </disk>
	I1209 22:51:01.741361   36778 main.go:141] libmachine: (ha-920193-m03)     <disk type='file' device='disk'>
	I1209 22:51:01.741373   36778 main.go:141] libmachine: (ha-920193-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:51:01.741386   36778 main.go:141] libmachine: (ha-920193-m03)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/ha-920193-m03.rawdisk'/>
	I1209 22:51:01.741397   36778 main.go:141] libmachine: (ha-920193-m03)       <target dev='hda' bus='virtio'/>
	I1209 22:51:01.741408   36778 main.go:141] libmachine: (ha-920193-m03)     </disk>
	I1209 22:51:01.741418   36778 main.go:141] libmachine: (ha-920193-m03)     <interface type='network'>
	I1209 22:51:01.741429   36778 main.go:141] libmachine: (ha-920193-m03)       <source network='mk-ha-920193'/>
	I1209 22:51:01.741437   36778 main.go:141] libmachine: (ha-920193-m03)       <model type='virtio'/>
	I1209 22:51:01.741447   36778 main.go:141] libmachine: (ha-920193-m03)     </interface>
	I1209 22:51:01.741456   36778 main.go:141] libmachine: (ha-920193-m03)     <interface type='network'>
	I1209 22:51:01.741472   36778 main.go:141] libmachine: (ha-920193-m03)       <source network='default'/>
	I1209 22:51:01.741483   36778 main.go:141] libmachine: (ha-920193-m03)       <model type='virtio'/>
	I1209 22:51:01.741496   36778 main.go:141] libmachine: (ha-920193-m03)     </interface>
	I1209 22:51:01.741507   36778 main.go:141] libmachine: (ha-920193-m03)     <serial type='pty'>
	I1209 22:51:01.741516   36778 main.go:141] libmachine: (ha-920193-m03)       <target port='0'/>
	I1209 22:51:01.741525   36778 main.go:141] libmachine: (ha-920193-m03)     </serial>
	I1209 22:51:01.741534   36778 main.go:141] libmachine: (ha-920193-m03)     <console type='pty'>
	I1209 22:51:01.741544   36778 main.go:141] libmachine: (ha-920193-m03)       <target type='serial' port='0'/>
	I1209 22:51:01.741552   36778 main.go:141] libmachine: (ha-920193-m03)     </console>
	I1209 22:51:01.741566   36778 main.go:141] libmachine: (ha-920193-m03)     <rng model='virtio'>
	I1209 22:51:01.741580   36778 main.go:141] libmachine: (ha-920193-m03)       <backend model='random'>/dev/random</backend>
	I1209 22:51:01.741590   36778 main.go:141] libmachine: (ha-920193-m03)     </rng>
	I1209 22:51:01.741597   36778 main.go:141] libmachine: (ha-920193-m03)     
	I1209 22:51:01.741606   36778 main.go:141] libmachine: (ha-920193-m03)     
	I1209 22:51:01.741616   36778 main.go:141] libmachine: (ha-920193-m03)   </devices>
	I1209 22:51:01.741623   36778 main.go:141] libmachine: (ha-920193-m03) </domain>
	I1209 22:51:01.741635   36778 main.go:141] libmachine: (ha-920193-m03) 
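
The lines above stream the libvirt domain XML that the kvm2 driver defines for the new node. A minimal sketch of rendering a similar (abbreviated) definition from a template, with placeholder paths and only the fields visible in the log, is:

	package main

	import (
		"os"
		"text/template"
	)

	// domainTmpl is a hypothetical, abbreviated version of the definition
	// logged above; the real driver template also adds serial, console and
	// rng devices and a second NIC on the default network.
	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.Memory}}</memory>
	  <vcpu>{{.CPU}}</vcpu>
	  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	  <devices>
	    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
	    <disk type='file' device='disk'><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
	    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
	  </devices>
	</domain>`

	type machine struct {
		Name, ISO, DiskPath, Network string
		Memory, CPU                  int
	}

	func main() {
		m := machine{
			Name:     "ha-920193-m03",
			Memory:   2200,
			CPU:      2,
			ISO:      "/path/to/boot2docker.iso",       // placeholder path
			DiskPath: "/path/to/ha-920193-m03.rawdisk", // placeholder path
			Network:  "mk-ha-920193",
		}
		t := template.Must(template.New("domain").Parse(domainTmpl))
		if err := t.Execute(os.Stdout, m); err != nil {
			panic(err)
		}
	}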
	I1209 22:51:01.749628   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:ca:84:fc in network default
	I1209 22:51:01.750354   36778 main.go:141] libmachine: (ha-920193-m03) Ensuring networks are active...
	I1209 22:51:01.750395   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:01.751100   36778 main.go:141] libmachine: (ha-920193-m03) Ensuring network default is active
	I1209 22:51:01.751465   36778 main.go:141] libmachine: (ha-920193-m03) Ensuring network mk-ha-920193 is active
	I1209 22:51:01.751930   36778 main.go:141] libmachine: (ha-920193-m03) Getting domain xml...
	I1209 22:51:01.752802   36778 main.go:141] libmachine: (ha-920193-m03) Creating domain...
	I1209 22:51:03.003454   36778 main.go:141] libmachine: (ha-920193-m03) Waiting to get IP...
	I1209 22:51:03.004238   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.004647   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.004670   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.004626   37541 retry.go:31] will retry after 297.46379ms: waiting for machine to come up
	I1209 22:51:03.304151   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.304627   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.304651   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.304586   37541 retry.go:31] will retry after 341.743592ms: waiting for machine to come up
	I1209 22:51:03.648185   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.648648   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.648681   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.648610   37541 retry.go:31] will retry after 348.703343ms: waiting for machine to come up
	I1209 22:51:03.999220   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.999761   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.999783   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.999722   37541 retry.go:31] will retry after 471.208269ms: waiting for machine to come up
	I1209 22:51:04.473118   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:04.473644   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:04.473698   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:04.473622   37541 retry.go:31] will retry after 567.031016ms: waiting for machine to come up
	I1209 22:51:05.042388   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:05.042845   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:05.042890   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:05.042828   37541 retry.go:31] will retry after 635.422002ms: waiting for machine to come up
	I1209 22:51:05.679729   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:05.680179   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:05.680197   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:05.680151   37541 retry.go:31] will retry after 1.009913666s: waiting for machine to come up
	I1209 22:51:06.691434   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:06.692093   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:06.692115   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:06.692049   37541 retry.go:31] will retry after 1.22911274s: waiting for machine to come up
	I1209 22:51:07.923301   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:07.923871   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:07.923895   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:07.923821   37541 retry.go:31] will retry after 1.262587003s: waiting for machine to come up
	I1209 22:51:09.187598   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:09.188051   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:09.188081   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:09.188005   37541 retry.go:31] will retry after 2.033467764s: waiting for machine to come up
	I1209 22:51:11.223284   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:11.223845   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:11.223872   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:11.223795   37541 retry.go:31] will retry after 2.889234368s: waiting for machine to come up
	I1209 22:51:14.116824   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:14.117240   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:14.117262   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:14.117201   37541 retry.go:31] will retry after 2.84022101s: waiting for machine to come up
	I1209 22:51:16.958771   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:16.959194   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:16.959219   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:16.959151   37541 retry.go:31] will retry after 3.882632517s: waiting for machine to come up
	I1209 22:51:20.846163   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:20.846626   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:20.846647   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:20.846582   37541 retry.go:31] will retry after 4.879681656s: waiting for machine to come up
	I1209 22:51:25.727341   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.727988   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has current primary IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.728010   36778 main.go:141] libmachine: (ha-920193-m03) Found IP for machine: 192.168.39.45
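
The DHCP-lease polling above retries with a delay that grows from a few hundred milliseconds to several seconds before the IP finally appears. A generic retry-with-growing-backoff sketch in that spirit (not minikube's actual retry.go) is:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry calls fn until it succeeds or maxWait elapses, sleeping for a
	// delay that grows (with a little jitter) between attempts, similar to
	// the "will retry after ..." messages in the log above.
	func retry(fn func() error, initial, maxDelay, maxWait time.Duration) error {
		deadline := time.Now().Add(maxWait)
		delay := initial
		for {
			if err := fn(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			sleep := delay + jitter
			fmt.Printf("will retry after %v\n", sleep)
			time.Sleep(sleep)
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
		}
	}

	func main() {
		attempts := 0
		err := retry(func() error {
			attempts++
			if attempts < 4 { // stand-in for "unable to find current IP address"
				return errors.New("no IP yet")
			}
			return nil
		}, 300*time.Millisecond, 5*time.Second, time.Minute)
		fmt.Println("done:", err)
	}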
	I1209 22:51:25.728024   36778 main.go:141] libmachine: (ha-920193-m03) Reserving static IP address...
	I1209 22:51:25.728426   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find host DHCP lease matching {name: "ha-920193-m03", mac: "52:54:00:50:0a:7f", ip: "192.168.39.45"} in network mk-ha-920193
	I1209 22:51:25.801758   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Getting to WaitForSSH function...
	I1209 22:51:25.801788   36778 main.go:141] libmachine: (ha-920193-m03) Reserved static IP address: 192.168.39.45
	I1209 22:51:25.801801   36778 main.go:141] libmachine: (ha-920193-m03) Waiting for SSH to be available...
	I1209 22:51:25.804862   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.805259   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:minikube Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:25.805292   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.805437   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Using SSH client type: external
	I1209 22:51:25.805466   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa (-rw-------)
	I1209 22:51:25.805497   36778 main.go:141] libmachine: (ha-920193-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:51:25.805521   36778 main.go:141] libmachine: (ha-920193-m03) DBG | About to run SSH command:
	I1209 22:51:25.805536   36778 main.go:141] libmachine: (ha-920193-m03) DBG | exit 0
	I1209 22:51:25.927825   36778 main.go:141] libmachine: (ha-920193-m03) DBG | SSH cmd err, output: <nil>: 
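
The availability probe above invokes the external ssh client with host-key checking disabled and runs exit 0 until it succeeds. A hedged sketch of the same check via os/exec, with placeholder key path and address (the real code assembles a longer option list, as logged), is:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady returns nil once `ssh ... exit 0` succeeds against the guest.
	// The key path and address are placeholders for the values in the log.
	func sshReady(addr, keyPath string) error {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+addr,
			"exit 0")
		return cmd.Run()
	}

	func main() {
		for i := 0; i < 30; i++ {
			if err := sshReady("192.168.39.45", "/path/to/id_rsa"); err == nil {
				fmt.Println("ssh is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for ssh")
	}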
	I1209 22:51:25.928111   36778 main.go:141] libmachine: (ha-920193-m03) KVM machine creation complete!
	I1209 22:51:25.928439   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetConfigRaw
	I1209 22:51:25.928948   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:25.929144   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:25.929273   36778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:51:25.929318   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetState
	I1209 22:51:25.930677   36778 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:51:25.930689   36778 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:51:25.930694   36778 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:51:25.930702   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:25.933545   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.933940   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:25.933962   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.934133   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:25.934287   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:25.934450   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:25.934592   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:25.934747   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:25.934964   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:25.934979   36778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:51:26.038809   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:51:26.038831   36778 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:51:26.038839   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.041686   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.041976   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.042008   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.042164   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.042336   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.042474   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.042609   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.042802   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.042955   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.042966   36778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:51:26.148122   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:51:26.148211   36778 main.go:141] libmachine: found compatible host: buildroot
	I1209 22:51:26.148225   36778 main.go:141] libmachine: Provisioning with buildroot...
	I1209 22:51:26.148236   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:26.148529   36778 buildroot.go:166] provisioning hostname "ha-920193-m03"
	I1209 22:51:26.148558   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:26.148758   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.151543   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.151998   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.152027   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.152153   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.152318   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.152485   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.152628   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.152792   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.152967   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.152984   36778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193-m03 && echo "ha-920193-m03" | sudo tee /etc/hostname
	I1209 22:51:26.273873   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193-m03
	
	I1209 22:51:26.273909   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.276949   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.277338   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.277363   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.277530   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.277710   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.277857   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.278009   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.278182   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.278378   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.278395   36778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:51:26.396863   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:51:26.396892   36778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:51:26.396911   36778 buildroot.go:174] setting up certificates
	I1209 22:51:26.396941   36778 provision.go:84] configureAuth start
	I1209 22:51:26.396969   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:26.397249   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:26.400060   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.400552   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.400587   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.400787   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.403205   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.403621   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.403649   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.403809   36778 provision.go:143] copyHostCerts
	I1209 22:51:26.403843   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:51:26.403883   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:51:26.403895   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:51:26.403963   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:51:26.404040   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:51:26.404057   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:51:26.404065   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:51:26.404088   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:51:26.404134   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:51:26.404151   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:51:26.404158   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:51:26.404179   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:51:26.404226   36778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193-m03 san=[127.0.0.1 192.168.39.45 ha-920193-m03 localhost minikube]
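
The server certificate is generated on the host with the listed IPs and host names as subject alternative names and then copied to the guest. A simplified, self-signed sketch with crypto/x509 (the real flow signs server.pem with the profile CA in certs/ca.pem, which this sketch skips) is:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-920193-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the san=[...] list in the log line above.
			DNSNames:    []string{"ha-920193-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.45")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}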
	I1209 22:51:26.742826   36778 provision.go:177] copyRemoteCerts
	I1209 22:51:26.742899   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:51:26.742929   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.745666   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.745993   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.746025   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.746168   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.746370   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.746525   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.746673   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:26.830893   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:51:26.830957   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:51:26.856889   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:51:26.856964   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 22:51:26.883482   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:51:26.883555   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 22:51:26.908478   36778 provision.go:87] duration metric: took 511.5225ms to configureAuth
	I1209 22:51:26.908504   36778 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:51:26.908720   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:26.908806   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.911525   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.911882   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.911910   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.912106   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.912305   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.912470   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.912617   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.912830   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.913029   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.913046   36778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:51:27.123000   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:51:27.123030   36778 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:51:27.123040   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetURL
	I1209 22:51:27.124367   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Using libvirt version 6000000
	I1209 22:51:27.126749   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.127125   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.127158   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.127291   36778 main.go:141] libmachine: Docker is up and running!
	I1209 22:51:27.127312   36778 main.go:141] libmachine: Reticulating splines...
	I1209 22:51:27.127327   36778 client.go:171] duration metric: took 25.938971166s to LocalClient.Create
	I1209 22:51:27.127361   36778 start.go:167] duration metric: took 25.939054874s to libmachine.API.Create "ha-920193"
	I1209 22:51:27.127375   36778 start.go:293] postStartSetup for "ha-920193-m03" (driver="kvm2")
	I1209 22:51:27.127391   36778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:51:27.127417   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.127659   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:51:27.127685   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:27.130451   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.130869   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.130897   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.131187   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.131380   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.131593   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.131737   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:27.214943   36778 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:51:27.219203   36778 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:51:27.219230   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:51:27.219297   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:51:27.219368   36778 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:51:27.219377   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:51:27.219454   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:51:27.229647   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:51:27.256219   36778 start.go:296] duration metric: took 128.828108ms for postStartSetup
	I1209 22:51:27.256272   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetConfigRaw
	I1209 22:51:27.256939   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:27.259520   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.259847   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.259871   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.260187   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:51:27.260393   36778 start.go:128] duration metric: took 26.090950019s to createHost
	I1209 22:51:27.260418   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:27.262865   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.263234   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.263258   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.263424   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.263637   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.263812   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.263948   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.264111   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:27.264266   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:27.264276   36778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:51:27.367958   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733784687.346724594
	
	I1209 22:51:27.367980   36778 fix.go:216] guest clock: 1733784687.346724594
	I1209 22:51:27.367990   36778 fix.go:229] Guest: 2024-12-09 22:51:27.346724594 +0000 UTC Remote: 2024-12-09 22:51:27.260405928 +0000 UTC m=+144.153092475 (delta=86.318666ms)
	I1209 22:51:27.368010   36778 fix.go:200] guest clock delta is within tolerance: 86.318666ms
	I1209 22:51:27.368017   36778 start.go:83] releasing machines lock for "ha-920193-m03", held for 26.19869273s
	I1209 22:51:27.368043   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.368295   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:27.370584   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.370886   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.370925   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.372694   36778 out.go:177] * Found network options:
	I1209 22:51:27.373916   36778 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.43
	W1209 22:51:27.375001   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 22:51:27.375023   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:51:27.375036   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.375488   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.375695   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.375813   36778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:51:27.375854   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	W1209 22:51:27.375861   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 22:51:27.375898   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:51:27.375979   36778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:51:27.376001   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:27.378647   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.378715   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.378991   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.379016   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.379059   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.379077   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.379200   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.379345   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.379350   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.379608   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.379611   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.379810   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.379814   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:27.379979   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:27.613722   36778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:51:27.619553   36778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:51:27.619634   36778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:51:27.635746   36778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 22:51:27.635772   36778 start.go:495] detecting cgroup driver to use...
	I1209 22:51:27.635826   36778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:51:27.653845   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:51:27.668792   36778 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:51:27.668852   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:51:27.683547   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:51:27.698233   36778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:51:27.824917   36778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:51:27.972308   36778 docker.go:233] disabling docker service ...
	I1209 22:51:27.972387   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:51:27.987195   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:51:28.000581   36778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:51:28.137925   36778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:51:28.271243   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:51:28.285221   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:51:28.303416   36778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:51:28.303486   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.314415   36778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:51:28.314487   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.324832   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.336511   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.346899   36778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:51:28.358193   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.368602   36778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.386409   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
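
The run of "sed -i" commands above prepares CRI-O on the new node: pin the pause image to registry.k8s.io/pause:3.10, force the cgroupfs cgroup manager, re-add conmon_cgroup = "pod", and open unprivileged ports through default_sysctls, all inside /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of the first three in-place edits (illustrative only; the harness really does run sed over SSH, and the default_sysctls edits follow the same pattern):

package main

import (
    "os"
    "regexp"
)

func main() {
    const path = "/etc/crio/crio.conf.d/02-crio.conf"
    b, err := os.ReadFile(path)
    if err != nil {
        panic(err)
    }
    conf := string(b)

    // pause_image = "registry.k8s.io/pause:3.10"
    conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    // cgroup_manager = "cgroupfs"
    conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    // drop any existing conmon_cgroup line, then re-add it right after cgroup_manager
    conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
    conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

    if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
        panic(err)
    }
}
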
	I1209 22:51:28.397070   36778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:51:28.406418   36778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:51:28.406478   36778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:51:28.419010   36778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 22:51:28.428601   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:51:28.547013   36778 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 22:51:28.639590   36778 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:51:28.639672   36778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:51:28.644400   36778 start.go:563] Will wait 60s for crictl version
	I1209 22:51:28.644447   36778 ssh_runner.go:195] Run: which crictl
	I1209 22:51:28.648450   36778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:51:28.685819   36778 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:51:28.685915   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:51:28.713055   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:51:28.743093   36778 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:51:28.744486   36778 out.go:177]   - env NO_PROXY=192.168.39.102
	I1209 22:51:28.745701   36778 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.43
	I1209 22:51:28.746682   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:28.749397   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:28.749762   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:28.749786   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:28.749968   36778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:51:28.754027   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:51:28.765381   36778 mustload.go:65] Loading cluster: ha-920193
	I1209 22:51:28.765606   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:28.765871   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:28.765916   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:28.781482   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I1209 22:51:28.781893   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:28.782266   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:28.782287   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:28.782526   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:28.782726   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:51:28.784149   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:51:28.784420   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:28.784463   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:28.799758   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1209 22:51:28.800232   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:28.800726   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:28.800752   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:28.801514   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:28.801709   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:51:28.801891   36778 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.45
	I1209 22:51:28.801903   36778 certs.go:194] generating shared ca certs ...
	I1209 22:51:28.801923   36778 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:51:28.802065   36778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:51:28.802119   36778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:51:28.802134   36778 certs.go:256] generating profile certs ...
	I1209 22:51:28.802225   36778 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:51:28.802259   36778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a
	I1209 22:51:28.802283   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.43 192.168.39.45 192.168.39.254]
	I1209 22:51:28.918029   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a ...
	I1209 22:51:28.918070   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a: {Name:mkb9baad787ad98ea3bbef921d1279904d63e258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:51:28.918300   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a ...
	I1209 22:51:28.918321   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a: {Name:mk6d0bc06f9a231b982576741314205a71ae81f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:51:28.918454   36778 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:51:28.918653   36778 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
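
The profile certificate regenerated above must list every way the API server can be reached in its IP SANs: the in-cluster service IPs (10.96.0.1, 10.0.0.1), localhost, all three control-plane node IPs, and the HA virtual IP 192.168.39.254. A self-contained Go sketch that issues such a serving certificate (it creates a throwaway CA just for the example; minikube's certs.go instead signs with the profile's existing ca.key):

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // Throwaway CA standing in for .minikube/ca.{crt,key}; illustrative only.
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
        BasicConstraintsValid: true,
    }
    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    caCert, _ := x509.ParseCertificate(caDER)

    // API-server serving cert with the SAN IPs shown in the log above.
    key, _ := rsa.GenerateKey(rand.Reader, 2048)
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "minikube"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses: []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            net.ParseIP("192.168.39.102"), net.ParseIP("192.168.39.43"),
            net.ParseIP("192.168.39.45"), net.ParseIP("192.168.39.254"),
        },
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
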
	I1209 22:51:28.918832   36778 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:51:28.918852   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:51:28.918869   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:51:28.918882   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:51:28.918897   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:51:28.918909   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:51:28.918920   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:51:28.918930   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:51:28.918940   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:51:28.918992   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:51:28.919020   36778 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:51:28.919030   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:51:28.919050   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:51:28.919071   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:51:28.919092   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:51:28.919165   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:51:28.919200   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:51:28.919214   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:51:28.919226   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:28.919256   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:51:28.922496   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:28.922907   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:51:28.922924   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:28.923121   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:51:28.923334   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:51:28.923493   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:51:28.923637   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:51:28.995976   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 22:51:29.001595   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 22:51:29.014651   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 22:51:29.018976   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 22:51:29.031698   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 22:51:29.035774   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 22:51:29.047740   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 22:51:29.055239   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 22:51:29.068897   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 22:51:29.073278   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 22:51:29.083471   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 22:51:29.087771   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1209 22:51:29.099200   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:51:29.124484   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:51:29.146898   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:51:29.170925   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:51:29.194172   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1209 22:51:29.216851   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 22:51:29.238922   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:51:29.261472   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:51:29.285294   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:51:29.308795   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:51:29.332153   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:51:29.356878   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 22:51:29.373363   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 22:51:29.389889   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 22:51:29.406229   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 22:51:29.422321   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 22:51:29.439481   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1209 22:51:29.457534   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 22:51:29.474790   36778 ssh_runner.go:195] Run: openssl version
	I1209 22:51:29.480386   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:51:29.491491   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:51:29.496002   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:51:29.496065   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:51:29.501912   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:51:29.512683   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:51:29.523589   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:29.527903   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:29.527953   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:29.533408   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 22:51:29.544241   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:51:29.554741   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:51:29.559538   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:51:29.559622   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:51:29.565390   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 22:51:29.576363   36778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:51:29.580324   36778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:51:29.580397   36778 kubeadm.go:934] updating node {m03 192.168.39.45 8443 v1.31.2 crio true true} ...
	I1209 22:51:29.580506   36778 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 22:51:29.580552   36778 kube-vip.go:115] generating kube-vip config ...
	I1209 22:51:29.580597   36778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:51:29.601123   36778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:51:29.601198   36778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
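
The kube-vip manifest above is written a few steps later to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip as a static pod that claims the 192.168.39.254 virtual IP and load-balances API traffic on port 8443 across the control-plane nodes. A small Go sketch of rendering such a manifest from a handful of parameters with text/template (illustrative and heavily trimmed; not minikube's actual kube-vip template):

package main

import (
    "os"
    "text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.7
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: cp_enable
      value: "true"
    - name: lb_enable
      value: "true"
  hostNetwork: true
`

func main() {
    t := template.Must(template.New("kube-vip").Parse(manifest))
    _ = t.Execute(os.Stdout, struct {
        VIP, Port, Interface string
    }{VIP: "192.168.39.254", Port: "8443", Interface: "eth0"})
}
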
	I1209 22:51:29.601245   36778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:51:29.616816   36778 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 22:51:29.616873   36778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 22:51:29.626547   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 22:51:29.626551   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1209 22:51:29.626581   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:51:29.626608   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:51:29.626662   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:51:29.626551   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1209 22:51:29.626680   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:51:29.626713   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:51:29.630710   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 22:51:29.630743   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 22:51:29.661909   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:51:29.661957   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 22:51:29.661993   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 22:51:29.662034   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:51:29.693387   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 22:51:29.693423   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
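
The "Not caching binary" lines above carry dl.k8s.io URLs tagged with checksum=file:...sha256: each of kubeadm, kubectl and kubelet is paired with its published .sha256 file and verified before it ends up in /var/lib/minikube/binaries/v1.31.2 on the node. A Go sketch of that fetch-and-verify step for one binary (illustrative only; minikube's own logic is in the binary.go referenced in the log):

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
    "net/http"
    "strings"
)

// fetch downloads a URL into memory, failing on any non-200 response.
func fetch(url string) ([]byte, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    }
    return io.ReadAll(resp.Body)
}

func main() {
    const base = "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"

    bin, err := fetch(base)
    if err != nil {
        panic(err)
    }
    sum, err := fetch(base + ".sha256")
    if err != nil {
        panic(err)
    }
    want := strings.Fields(string(sum))[0]
    h := sha256.Sum256(bin)
    got := hex.EncodeToString(h[:])
    if got != want {
        panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
    }
    fmt.Println("kubectl checksum OK:", got)
}
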
	I1209 22:51:30.497307   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 22:51:30.507919   36778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 22:51:30.525676   36778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:51:30.544107   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 22:51:30.560963   36778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:51:30.564949   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:51:30.577803   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:51:30.711834   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:51:30.729249   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:51:30.729790   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:30.729852   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:30.745894   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I1209 22:51:30.746400   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:30.746903   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:30.746923   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:30.747244   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:30.747474   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:51:30.747637   36778 start.go:317] joinCluster: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:51:30.747751   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 22:51:30.747772   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:51:30.750739   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:30.751188   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:51:30.751212   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:30.751382   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:51:30.751610   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:51:30.751784   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:51:30.751955   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:51:30.921112   36778 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:51:30.921184   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uin3tt.yfutths3ueks8fx7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m03 --control-plane --apiserver-advertise-address=192.168.39.45 --apiserver-bind-port=8443"
	I1209 22:51:51.979391   36778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uin3tt.yfutths3ueks8fx7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m03 --control-plane --apiserver-advertise-address=192.168.39.45 --apiserver-bind-port=8443": (21.05816353s)
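
The join command that just completed authenticates the control plane with --discovery-token-ca-cert-hash; kubeadm defines that value as "sha256:" plus the hex SHA-256 of the cluster CA certificate's Subject Public Key Info. A Go sketch that recomputes it from the ca.crt already copied to the node (path taken from the log; illustrative):

package main

import (
    "crypto/sha256"
    "crypto/x509"
    "encoding/hex"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    // On a minikube node this is /var/lib/minikube/certs/ca.crt; on the
    // Jenkins host the same CA lives at .minikube/ca.crt.
    pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(pemBytes)
    if block == nil {
        panic("no PEM block in ca.crt")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
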
	I1209 22:51:51.979426   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 22:51:52.687851   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-920193-m03 minikube.k8s.io/updated_at=2024_12_09T22_51_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=ha-920193 minikube.k8s.io/primary=false
	I1209 22:51:52.803074   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-920193-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 22:51:52.923717   36778 start.go:319] duration metric: took 22.176073752s to joinCluster
	I1209 22:51:52.923810   36778 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:51:52.924248   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:52.925117   36778 out.go:177] * Verifying Kubernetes components...
	I1209 22:51:52.927170   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:51:53.166362   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:51:53.186053   36778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:51:53.186348   36778 kapi.go:59] client config for ha-920193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key", CAFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c9a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 22:51:53.186424   36778 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1209 22:51:53.186669   36778 node_ready.go:35] waiting up to 6m0s for node "ha-920193-m03" to be "Ready" ...
	I1209 22:51:53.186744   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:53.186755   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:53.186774   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:53.186786   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:53.191049   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:53.686961   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:53.686986   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:53.686997   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:53.687007   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:53.691244   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:54.186985   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:54.187011   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:54.187024   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:54.187030   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:54.265267   36778 round_trippers.go:574] Response Status: 200 OK in 78 milliseconds
	I1209 22:51:54.687008   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:54.687031   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:54.687042   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:54.687050   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:54.690480   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:55.187500   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:55.187525   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:55.187535   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:55.187540   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:55.191178   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:55.191830   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:51:55.687762   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:55.687790   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:55.687802   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:55.687832   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:55.691762   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:56.187494   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:56.187516   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:56.187534   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:56.187543   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:56.191706   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:56.687665   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:56.687691   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:56.687700   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:56.687705   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:56.690707   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:51:57.187710   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:57.187731   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:57.187739   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:57.187743   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:57.191208   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:57.192244   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:51:57.687242   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:57.687266   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:57.687277   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:57.687284   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:57.692231   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:58.187334   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:58.187369   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:58.187404   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:58.187410   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:58.190420   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:51:58.687040   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:58.687060   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:58.687087   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:58.687092   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:58.690458   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:59.187542   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:59.187579   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:59.187590   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:59.187598   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:59.191084   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:59.687057   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:59.687079   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:59.687087   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:59.687090   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:59.762365   36778 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I1209 22:51:59.763672   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:00.187782   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:00.187809   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:00.187824   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:00.187830   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:00.190992   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:00.687396   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:00.687424   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:00.687436   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:00.687443   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:00.690509   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:01.187706   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:01.187726   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:01.187735   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:01.187738   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:01.191284   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:01.687807   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:01.687830   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:01.687838   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:01.687841   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:01.692246   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:02.187139   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:02.187164   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:02.187172   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:02.187176   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:02.191262   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:02.191900   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:02.687239   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:02.687260   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:02.687268   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:02.687272   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:02.690588   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:03.186879   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:03.186901   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:03.186909   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:03.186913   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:03.190077   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:03.686945   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:03.686970   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:03.686976   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:03.686980   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:03.690246   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:04.187422   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:04.187453   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:04.187461   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:04.187475   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:04.190833   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:04.686862   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:04.686888   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:04.686895   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:04.686899   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:04.690474   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:04.691179   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:05.187647   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:05.187672   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:05.187680   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:05.187686   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:05.191042   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:05.687592   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:05.687619   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:05.687631   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:05.687638   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:05.695966   36778 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1209 22:52:06.187585   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:06.187617   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:06.187624   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:06.187627   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:06.190871   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:06.687343   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:06.687365   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:06.687372   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:06.687376   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:06.691065   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:06.691740   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:07.186885   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:07.186908   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:07.186916   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:07.186920   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:07.190452   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:07.687481   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:07.687506   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:07.687517   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:07.687522   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:07.690781   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:08.187842   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:08.187865   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:08.187873   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:08.187877   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:08.190745   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:52:08.687010   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:08.687039   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:08.687047   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:08.687050   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:08.690129   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:09.187057   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:09.187082   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:09.187100   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:09.187105   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:09.190445   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:09.191229   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:09.687849   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:09.687877   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:09.687887   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:09.687896   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:09.691161   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:10.187009   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:10.187030   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:10.187038   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:10.187041   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:10.190809   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:10.687323   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:10.687345   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:10.687353   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:10.687356   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:10.690476   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.187726   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:11.187753   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.187765   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.187771   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.190528   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:52:11.191296   36778 node_ready.go:49] node "ha-920193-m03" has status "Ready":"True"
	I1209 22:52:11.191322   36778 node_ready.go:38] duration metric: took 18.004635224s for node "ha-920193-m03" to be "Ready" ...
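
The 18-second wait that just finished is a plain polling loop: GET /api/v1/nodes/ha-920193-m03 roughly every half second, up to the 6m0s budget, until the node's Ready condition reports "True". A Go sketch of the same loop (illustrative; TLS and client-certificate setup for the API server is omitted and assumed to already be configured on the http.Client):

package main

import (
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "time"
)

// node mirrors just the part of the Node object the readiness check needs.
type node struct {
    Status struct {
        Conditions []struct {
            Type   string `json:"type"`
            Status string `json:"status"`
        } `json:"conditions"`
    } `json:"status"`
}

func nodeReady(c *http.Client, url string) (bool, error) {
    resp, err := c.Get(url)
    if err != nil {
        return false, err
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return false, err
    }
    var n node
    if err := json.Unmarshal(body, &n); err != nil {
        return false, err
    }
    for _, cond := range n.Status.Conditions {
        if cond.Type == "Ready" {
            return cond.Status == "True", nil
        }
    }
    return false, nil
}

func main() {
    c := &http.Client{} // the real harness carries the profile's client cert/key and CA here
    url := "https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03"
    deadline := time.Now().Add(6 * time.Minute)
    for time.Now().Before(deadline) {
        if ok, err := nodeReady(c, url); err == nil && ok {
            fmt.Println("node is Ready")
            return
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("timed out waiting for node to be Ready")
}

The pod checks that follow in the log use the same pattern against /api/v1/namespaces/kube-system/pods, looking at each pod's Ready condition instead of the node's.
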
	I1209 22:52:11.191347   36778 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:52:11.191433   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:11.191446   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.191457   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.191463   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.197370   36778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 22:52:11.208757   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.208877   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9792g
	I1209 22:52:11.208889   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.208900   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.208908   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.213394   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.214171   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.214187   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.214197   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.214204   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.217611   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.218273   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.218301   36778 pod_ready.go:82] duration metric: took 9.507458ms for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.218314   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.218394   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pftgv
	I1209 22:52:11.218405   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.218415   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.218420   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.221934   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.223013   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.223030   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.223037   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.223041   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.226045   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:52:11.226613   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.226633   36778 pod_ready.go:82] duration metric: took 8.310101ms for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.226645   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.226713   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193
	I1209 22:52:11.226722   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.226729   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.226736   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.232210   36778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 22:52:11.233134   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.233148   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.233156   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.233159   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.236922   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.237775   36778 pod_ready.go:93] pod "etcd-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.237796   36778 pod_ready.go:82] duration metric: took 11.143234ms for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.237806   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.237867   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m02
	I1209 22:52:11.237875   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.237882   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.237887   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.242036   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.242839   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:11.242858   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.242869   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.242877   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.246444   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.247204   36778 pod_ready.go:93] pod "etcd-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.247221   36778 pod_ready.go:82] duration metric: took 9.409944ms for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.247231   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.388592   36778 request.go:632] Waited for 141.281694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m03
	I1209 22:52:11.388678   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m03
	I1209 22:52:11.388690   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.388704   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.388713   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.392012   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.587869   36778 request.go:632] Waited for 195.273739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:11.587951   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:11.587957   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.587964   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.587968   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.591423   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.592154   36778 pod_ready.go:93] pod "etcd-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.592174   36778 pod_ready.go:82] duration metric: took 344.933564ms for pod "etcd-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.592194   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.788563   36778 request.go:632] Waited for 196.298723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:52:11.788656   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:52:11.788669   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.788679   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.788687   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.792940   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.988037   36778 request.go:632] Waited for 194.354692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.988107   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.988113   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.988121   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.988125   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.992370   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.992995   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.993012   36778 pod_ready.go:82] duration metric: took 400.807496ms for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.993021   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.188095   36778 request.go:632] Waited for 195.006713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:52:12.188167   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:52:12.188172   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.188180   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.188185   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.191780   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.388747   36778 request.go:632] Waited for 196.170639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:12.388823   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:12.388829   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.388856   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.388869   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.392301   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.392894   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:12.392921   36778 pod_ready.go:82] duration metric: took 399.892746ms for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.392938   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.587836   36778 request.go:632] Waited for 194.810311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m03
	I1209 22:52:12.587925   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m03
	I1209 22:52:12.587934   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.587948   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.587958   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.591021   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.787947   36778 request.go:632] Waited for 196.297135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:12.788010   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:12.788016   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.788024   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.788032   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.791450   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.792173   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:12.792194   36778 pod_ready.go:82] duration metric: took 399.248841ms for pod "kube-apiserver-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.792210   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.988330   36778 request.go:632] Waited for 196.053217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:52:12.988409   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:52:12.988415   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.988423   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.988428   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.992155   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.188272   36778 request.go:632] Waited for 195.156662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:13.188340   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:13.188346   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.188354   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.188362   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.192008   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.192630   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:13.192650   36778 pod_ready.go:82] duration metric: took 400.432601ms for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.192661   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.388559   36778 request.go:632] Waited for 195.821537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:52:13.388616   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:52:13.388621   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.388629   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.388634   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.391883   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.587935   36778 request.go:632] Waited for 195.28191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:13.587989   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:13.587994   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.588007   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.588010   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.591630   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.592151   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:13.592169   36778 pod_ready.go:82] duration metric: took 399.499137ms for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.592180   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.788332   36778 request.go:632] Waited for 196.084844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m03
	I1209 22:52:13.788412   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m03
	I1209 22:52:13.788419   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.788429   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.788435   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.792121   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.988484   36778 request.go:632] Waited for 195.461528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:13.988555   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:13.988567   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.988579   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.988589   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.992243   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.992809   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:13.992827   36778 pod_ready.go:82] duration metric: took 400.64066ms for pod "kube-controller-manager-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.992842   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.187961   36778 request.go:632] Waited for 195.049639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:52:14.188050   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:52:14.188058   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.188071   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.188080   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.191692   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.388730   36778 request.go:632] Waited for 196.239352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:14.388788   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:14.388802   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.388813   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.388817   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.392311   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.392971   36778 pod_ready.go:93] pod "kube-proxy-lntbt" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:14.392992   36778 pod_ready.go:82] duration metric: took 400.138793ms for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.393007   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pr7zk" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.588013   36778 request.go:632] Waited for 194.93384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pr7zk
	I1209 22:52:14.588077   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pr7zk
	I1209 22:52:14.588082   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.588095   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.588102   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.591447   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.788698   36778 request.go:632] Waited for 196.390033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:14.788766   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:14.788775   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.788787   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.788800   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.792338   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.793156   36778 pod_ready.go:93] pod "kube-proxy-pr7zk" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:14.793181   36778 pod_ready.go:82] duration metric: took 400.165156ms for pod "kube-proxy-pr7zk" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.793195   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.988348   36778 request.go:632] Waited for 195.014123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:52:14.988427   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:52:14.988434   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.988444   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.988457   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.993239   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:15.188292   36778 request.go:632] Waited for 194.264701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.188390   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.188403   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.188418   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.188429   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.192041   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.192565   36778 pod_ready.go:93] pod "kube-proxy-r8nhm" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:15.192584   36778 pod_ready.go:82] duration metric: took 399.381952ms for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.192595   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.388147   36778 request.go:632] Waited for 195.488765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:52:15.388224   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:52:15.388233   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.388240   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.388248   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.391603   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.588758   36778 request.go:632] Waited for 196.3144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.588837   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.588843   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.588850   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.588860   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.592681   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.593301   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:15.593327   36778 pod_ready.go:82] duration metric: took 400.724982ms for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.593343   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.788627   36778 request.go:632] Waited for 195.204455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:52:15.788686   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:52:15.788691   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.788699   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.788704   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.792349   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.988329   36778 request.go:632] Waited for 195.36216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:15.988396   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:15.988402   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.988408   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.988412   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.991578   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.992400   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:15.992418   36778 pod_ready.go:82] duration metric: took 399.067203ms for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.992428   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:16.188427   36778 request.go:632] Waited for 195.939633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m03
	I1209 22:52:16.188480   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m03
	I1209 22:52:16.188489   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.188496   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.188501   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.192012   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:16.388006   36778 request.go:632] Waited for 195.368293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:16.388057   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:16.388062   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.388069   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.388073   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.392950   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:16.393391   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:16.393409   36778 pod_ready.go:82] duration metric: took 400.975145ms for pod "kube-scheduler-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:16.393420   36778 pod_ready.go:39] duration metric: took 5.202056835s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:52:16.393435   36778 api_server.go:52] waiting for apiserver process to appear ...
	I1209 22:52:16.393482   36778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 22:52:16.409725   36778 api_server.go:72] duration metric: took 23.485873684s to wait for apiserver process to appear ...
	I1209 22:52:16.409759   36778 api_server.go:88] waiting for apiserver healthz status ...
	I1209 22:52:16.409786   36778 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1209 22:52:16.414224   36778 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1209 22:52:16.414307   36778 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1209 22:52:16.414316   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.414324   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.414330   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.415229   36778 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1209 22:52:16.415280   36778 api_server.go:141] control plane version: v1.31.2
	I1209 22:52:16.415291   36778 api_server.go:131] duration metric: took 5.527187ms to wait for apiserver health ...
	I1209 22:52:16.415298   36778 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 22:52:16.588740   36778 request.go:632] Waited for 173.378808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.588806   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.588811   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.588818   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.588822   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.595459   36778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 22:52:16.602952   36778 system_pods.go:59] 24 kube-system pods found
	I1209 22:52:16.602979   36778 system_pods.go:61] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:52:16.602985   36778 system_pods.go:61] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:52:16.602989   36778 system_pods.go:61] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:52:16.602993   36778 system_pods.go:61] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:52:16.602996   36778 system_pods.go:61] "etcd-ha-920193-m03" [f9f5fa3d-3d06-47a8-b3f2-ed544b1cfb74] Running
	I1209 22:52:16.603001   36778 system_pods.go:61] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:52:16.603004   36778 system_pods.go:61] "kindnet-drj9q" [662d116b-e7ec-437c-9eeb-207cee5beecd] Running
	I1209 22:52:16.603007   36778 system_pods.go:61] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:52:16.603010   36778 system_pods.go:61] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:52:16.603015   36778 system_pods.go:61] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:52:16.603018   36778 system_pods.go:61] "kube-apiserver-ha-920193-m03" [1a55c551-c7bf-4b9f-ac49-e750cec2a92f] Running
	I1209 22:52:16.603022   36778 system_pods.go:61] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:52:16.603026   36778 system_pods.go:61] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:52:16.603031   36778 system_pods.go:61] "kube-controller-manager-ha-920193-m03" [d5278ea9-8878-4c18-bb6a-abb82a5bde12] Running
	I1209 22:52:16.603035   36778 system_pods.go:61] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:52:16.603038   36778 system_pods.go:61] "kube-proxy-pr7zk" [8236ddcf-f836-449c-8401-f799dd1a94f8] Running
	I1209 22:52:16.603041   36778 system_pods.go:61] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:52:16.603044   36778 system_pods.go:61] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:52:16.603047   36778 system_pods.go:61] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:52:16.603050   36778 system_pods.go:61] "kube-scheduler-ha-920193-m03" [27607c05-10b3-4d82-ba5e-2f3e7a44d2ad] Running
	I1209 22:52:16.603054   36778 system_pods.go:61] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:52:16.603057   36778 system_pods.go:61] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:52:16.603060   36778 system_pods.go:61] "kube-vip-ha-920193-m03" [7a747c56-20d6-41bb-b7c1-1db054ee699c] Running
	I1209 22:52:16.603062   36778 system_pods.go:61] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:52:16.603068   36778 system_pods.go:74] duration metric: took 187.765008ms to wait for pod list to return data ...
	I1209 22:52:16.603077   36778 default_sa.go:34] waiting for default service account to be created ...
	I1209 22:52:16.788510   36778 request.go:632] Waited for 185.359314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:52:16.788566   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:52:16.788571   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.788579   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.788586   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.791991   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:16.792139   36778 default_sa.go:45] found service account: "default"
	I1209 22:52:16.792154   36778 default_sa.go:55] duration metric: took 189.072143ms for default service account to be created ...
	I1209 22:52:16.792164   36778 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 22:52:16.988637   36778 request.go:632] Waited for 196.396881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.988723   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.988732   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.988740   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.988743   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.995659   36778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 22:52:17.002627   36778 system_pods.go:86] 24 kube-system pods found
	I1209 22:52:17.002660   36778 system_pods.go:89] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:52:17.002667   36778 system_pods.go:89] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:52:17.002672   36778 system_pods.go:89] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:52:17.002676   36778 system_pods.go:89] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:52:17.002679   36778 system_pods.go:89] "etcd-ha-920193-m03" [f9f5fa3d-3d06-47a8-b3f2-ed544b1cfb74] Running
	I1209 22:52:17.002683   36778 system_pods.go:89] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:52:17.002686   36778 system_pods.go:89] "kindnet-drj9q" [662d116b-e7ec-437c-9eeb-207cee5beecd] Running
	I1209 22:52:17.002690   36778 system_pods.go:89] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:52:17.002693   36778 system_pods.go:89] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:52:17.002697   36778 system_pods.go:89] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:52:17.002700   36778 system_pods.go:89] "kube-apiserver-ha-920193-m03" [1a55c551-c7bf-4b9f-ac49-e750cec2a92f] Running
	I1209 22:52:17.002703   36778 system_pods.go:89] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:52:17.002707   36778 system_pods.go:89] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:52:17.002710   36778 system_pods.go:89] "kube-controller-manager-ha-920193-m03" [d5278ea9-8878-4c18-bb6a-abb82a5bde12] Running
	I1209 22:52:17.002717   36778 system_pods.go:89] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:52:17.002720   36778 system_pods.go:89] "kube-proxy-pr7zk" [8236ddcf-f836-449c-8401-f799dd1a94f8] Running
	I1209 22:52:17.002723   36778 system_pods.go:89] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:52:17.002726   36778 system_pods.go:89] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:52:17.002730   36778 system_pods.go:89] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:52:17.002734   36778 system_pods.go:89] "kube-scheduler-ha-920193-m03" [27607c05-10b3-4d82-ba5e-2f3e7a44d2ad] Running
	I1209 22:52:17.002738   36778 system_pods.go:89] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:52:17.002740   36778 system_pods.go:89] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:52:17.002744   36778 system_pods.go:89] "kube-vip-ha-920193-m03" [7a747c56-20d6-41bb-b7c1-1db054ee699c] Running
	I1209 22:52:17.002747   36778 system_pods.go:89] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:52:17.002753   36778 system_pods.go:126] duration metric: took 210.583954ms to wait for k8s-apps to be running ...
	I1209 22:52:17.002760   36778 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 22:52:17.002802   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:52:17.018265   36778 system_svc.go:56] duration metric: took 15.492212ms WaitForService to wait for kubelet
	I1209 22:52:17.018301   36778 kubeadm.go:582] duration metric: took 24.09445385s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:52:17.018323   36778 node_conditions.go:102] verifying NodePressure condition ...
	I1209 22:52:17.188743   36778 request.go:632] Waited for 170.323133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1209 22:52:17.188800   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1209 22:52:17.188807   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:17.188816   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:17.188823   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:17.193008   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:17.194620   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:52:17.194642   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:52:17.194653   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:52:17.194657   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:52:17.194661   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:52:17.194664   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:52:17.194668   36778 node_conditions.go:105] duration metric: took 176.339707ms to run NodePressure ...
	I1209 22:52:17.194678   36778 start.go:241] waiting for startup goroutines ...
	I1209 22:52:17.194700   36778 start.go:255] writing updated cluster config ...
	I1209 22:52:17.194994   36778 ssh_runner.go:195] Run: rm -f paused
	I1209 22:52:17.247192   36778 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 22:52:17.250117   36778 out.go:177] * Done! kubectl is now configured to use "ha-920193" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 22:55:56 ha-920193 crio[663]: time="2024-12-09 22:55:56.670058762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784956670032907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93cf2f5d-5c2b-4606-944a-25eec1a8d4bb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:55:56 ha-920193 crio[663]: time="2024-12-09 22:55:56.670518790Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89967025-6e0f-4ac1-92a0-e4543db44aad name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:55:56 ha-920193 crio[663]: time="2024-12-09 22:55:56.670587461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89967025-6e0f-4ac1-92a0-e4543db44aad name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:55:56 ha-920193 crio[663]: time="2024-12-09 22:55:56.671346232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89967025-6e0f-4ac1-92a0-e4543db44aad name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:55:56 ha-920193 crio[663]: time="2024-12-09 22:55:56.710885848Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c2efeb1-b0d5-4349-a176-9b9c2f1a709f name=/runtime.v1.RuntimeService/Version
	Dec 09 22:55:56 ha-920193 crio[663]: time="2024-12-09 22:55:56.710974191Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c2efeb1-b0d5-4349-a176-9b9c2f1a709f name=/runtime.v1.RuntimeService/Version
	Dec 09 22:55:56 ha-920193 crio[663]: time="2024-12-09 22:55:56.712307415Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d5c60e0-24e7-4036-82e7-a5967b460d5d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:55:56 ha-920193 crio[663]: time="2024-12-09 22:55:56.712791746Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784956712766480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d5c60e0-24e7-4036-82e7-a5967b460d5d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:55:56 ha-920193 crio[663]: time="2024-12-09 22:55:56.713436589Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d83b0aea-f97f-4869-996a-336d7af376ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:55:56 ha-920193 crio[663]: time="2024-12-09 22:55:56.713498898Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d83b0aea-f97f-4869-996a-336d7af376ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:55:56 ha-920193 crio[663]: time="2024-12-09 22:55:56.713770264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d83b0aea-f97f-4869-996a-336d7af376ef name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b2098445c3438       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   32c399f593c29       busybox-7dff88458-4dbs2
	14b80feac0f9a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   28a5e497d421c       coredns-7c65d6cfc9-9792g
	6bdcee2ff30bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   8986bab4f9538       coredns-7c65d6cfc9-pftgv
	a6a62ed3f6ca8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   24f95152f1094       storage-provisioner
	d26f562ad5527       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   91e324c9c3171       kindnet-rcctv
	233aa49869db4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   7d30b07a36a6c       kube-proxy-r8nhm
	b845a7a938050       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   dcec6011252c4       kube-vip-ha-920193
	2c5a043b38715       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   a053c05339f97       kube-apiserver-ha-920193
	f0a29f1dc44e4       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   7dd45ba230f90       kube-controller-manager-ha-920193
	b8197a166eeaf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   5b9cd68863c14       etcd-ha-920193
	6ee0fecee78f0       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   ba6c2156966ab       kube-scheduler-ha-920193
	
	
	==> coredns [14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c] <==
	[INFO] 10.244.2.2:60285 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00013048s
	[INFO] 10.244.0.4:42105 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201273s
	[INFO] 10.244.0.4:33722 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003973627s
	[INFO] 10.244.0.4:50780 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003385872s
	[INFO] 10.244.0.4:46762 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000330906s
	[INFO] 10.244.0.4:41821 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099413s
	[INFO] 10.244.1.2:38814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240081s
	[INFO] 10.244.1.2:51472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001124121s
	[INFO] 10.244.1.2:49496 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094508s
	[INFO] 10.244.2.2:44597 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168981s
	[INFO] 10.244.2.2:56334 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001450617s
	[INFO] 10.244.2.2:52317 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077228s
	[INFO] 10.244.0.4:57299 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133066s
	[INFO] 10.244.0.4:56277 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119106s
	[INFO] 10.244.0.4:45466 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040838s
	[INFO] 10.244.1.2:44460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200839s
	[INFO] 10.244.2.2:38498 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135133s
	[INFO] 10.244.2.2:50433 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00021653s
	[INFO] 10.244.2.2:49338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098224s
	[INFO] 10.244.0.4:33757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178322s
	[INFO] 10.244.0.4:48357 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000197259s
	[INFO] 10.244.0.4:36014 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000126459s
	[INFO] 10.244.1.2:50940 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000306385s
	[INFO] 10.244.2.2:39693 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191708s
	[INFO] 10.244.2.2:43130 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156713s
	
	
	==> coredns [6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a] <==
	[INFO] 10.244.2.2:53803 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001802154s
	[INFO] 10.244.0.4:53804 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136883s
	[INFO] 10.244.0.4:33536 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133128s
	[INFO] 10.244.0.4:40697 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109987s
	[INFO] 10.244.1.2:60686 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001746087s
	[INFO] 10.244.1.2:57981 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000176425s
	[INFO] 10.244.1.2:42922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001279s
	[INFO] 10.244.1.2:49248 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000199359s
	[INFO] 10.244.1.2:56349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176613s
	[INFO] 10.244.2.2:37288 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194316s
	[INFO] 10.244.2.2:36807 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001853178s
	[INFO] 10.244.2.2:47892 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097133s
	[INFO] 10.244.2.2:50492 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000249713s
	[INFO] 10.244.2.2:42642 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102673s
	[INFO] 10.244.0.4:45744 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170409s
	[INFO] 10.244.1.2:36488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227015s
	[INFO] 10.244.1.2:37416 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118932s
	[INFO] 10.244.1.2:48536 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176061s
	[INFO] 10.244.2.2:47072 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110597s
	[INFO] 10.244.0.4:58052 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000268133s
	[INFO] 10.244.1.2:38540 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000277422s
	[INFO] 10.244.1.2:55804 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000232786s
	[INFO] 10.244.1.2:35281 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000214405s
	[INFO] 10.244.2.2:37415 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174588s
	[INFO] 10.244.2.2:32790 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097554s
	
	
	==> describe nodes <==
	Name:               ha-920193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T22_49_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:49:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:55:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:49:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:49:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:49:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:50:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-920193
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9825096d628741caa811f99c10cc6460
	  System UUID:                9825096d-6287-41ca-a811-f99c10cc6460
	  Boot ID:                    7af2b544-54c4-4e33-8dc8-e2313bb29389
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4dbs2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 coredns-7c65d6cfc9-9792g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m8s
	  kube-system                 coredns-7c65d6cfc9-pftgv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m8s
	  kube-system                 etcd-ha-920193                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m13s
	  kube-system                 kindnet-rcctv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m9s
	  kube-system                 kube-apiserver-ha-920193             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-920193    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-proxy-r8nhm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-ha-920193             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-vip-ha-920193                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m6s   kube-proxy       
	  Normal  Starting                 6m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m13s  kubelet          Node ha-920193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m13s  kubelet          Node ha-920193 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m13s  kubelet          Node ha-920193 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m9s   node-controller  Node ha-920193 event: Registered Node ha-920193 in Controller
	  Normal  NodeReady                5m53s  kubelet          Node ha-920193 status is now: NodeReady
	  Normal  RegisteredNode           5m14s  node-controller  Node ha-920193 event: Registered Node ha-920193 in Controller
	  Normal  RegisteredNode           4m     node-controller  Node ha-920193 event: Registered Node ha-920193 in Controller
	
	
	Name:               ha-920193-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T22_50_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:50:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:53:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-920193-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 418684ffa8244b8180cf28f3a347b4c2
	  System UUID:                418684ff-a824-4b81-80cf-28f3a347b4c2
	  Boot ID:                    15131626-aa5d-4727-aedd-7039ff10fa6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rkqdv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-920193-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-7bbbc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-920193-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-controller-manager-ha-920193-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-lntbt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-920193-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-vip-ha-920193-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m18s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m22s)  kubelet          Node ha-920193-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m22s)  kubelet          Node ha-920193-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m22s)  kubelet          Node ha-920193-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-920193-m02 event: Registered Node ha-920193-m02 in Controller
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-920193-m02 event: Registered Node ha-920193-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-920193-m02 event: Registered Node ha-920193-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-920193-m02 status is now: NodeNotReady
	
	
	Name:               ha-920193-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T22_51_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:51:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:55:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:51:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:51:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:51:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:52:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    ha-920193-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c09ac2bcafe5487187b79c07f4dd9720
	  System UUID:                c09ac2bc-afe5-4871-87b7-9c07f4dd9720
	  Boot ID:                    1fbc2da5-2f05-4c65-92cc-ea55dc184e77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zshqx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-920193-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m6s
	  kube-system                 kindnet-drj9q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-ha-920193-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-920193-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-pr7zk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-920193-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-vip-ha-920193-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     4m8s                 cidrAllocator    Node ha-920193-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node ha-920193-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node ha-920193-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node ha-920193-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-920193-m03 event: Registered Node ha-920193-m03 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-920193-m03 event: Registered Node ha-920193-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-920193-m03 event: Registered Node ha-920193-m03 in Controller
	
	
	Name:               ha-920193-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T22_52_56_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:52:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:55:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:52:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:52:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:52:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:53:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    ha-920193-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a2dbc042e3045febd5c0c9d1b2c22ec
	  System UUID:                4a2dbc04-2e30-45fe-bd5c-0c9d1b2c22ec
	  Boot ID:                    1261e6c2-362c-4edd-9457-2b833cda280a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4pzwv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-7d45n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  CIDRAssignmentFailed     3m2s                 cidrAllocator    Node ha-920193-m04 status is now: CIDRAssignmentFailed
	  Normal  Starting                 3m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-920193-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-920193-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-920193-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node ha-920193-m04 event: Registered Node ha-920193-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-920193-m04 event: Registered Node ha-920193-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-920193-m04 event: Registered Node ha-920193-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-920193-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 9 22:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049320] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036387] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.848109] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.938823] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.563382] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.738770] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.057878] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055312] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.165760] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.148687] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.252407] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.807769] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.142269] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.067556] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.253709] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.082838] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.454038] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 9 22:50] kauditd_printk_skb: 40 callbacks suppressed
	[ +36.675272] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9] <==
	{"level":"warn","ts":"2024-12-09T22:55:56.949625Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.012765Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.045910Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.049804Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.054928Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.058311Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.067752Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.075843Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.082148Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.086032Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.089040Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.095150Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.101934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.107218Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.110225Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.113587Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.121095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.126840Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.132083Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.135994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.138862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.142116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.147296Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.148768Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:55:57.152955Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 22:55:57 up 6 min,  0 users,  load average: 0.21, 0.22, 0.11
	Linux ha-920193 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a] <==
	I1209 22:55:24.240338       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:55:34.244268       1 main.go:297] Handling node with IPs: map[192.168.39.98:{}]
	I1209 22:55:34.244372       1 main.go:324] Node ha-920193-m04 has CIDR [10.244.3.0/24] 
	I1209 22:55:34.244633       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1209 22:55:34.244725       1 main.go:301] handling current node
	I1209 22:55:34.244752       1 main.go:297] Handling node with IPs: map[192.168.39.43:{}]
	I1209 22:55:34.244770       1 main.go:324] Node ha-920193-m02 has CIDR [10.244.1.0/24] 
	I1209 22:55:34.244900       1 main.go:297] Handling node with IPs: map[192.168.39.45:{}]
	I1209 22:55:34.244924       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:55:44.241125       1 main.go:297] Handling node with IPs: map[192.168.39.45:{}]
	I1209 22:55:44.241179       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:55:44.241517       1 main.go:297] Handling node with IPs: map[192.168.39.98:{}]
	I1209 22:55:44.241554       1 main.go:324] Node ha-920193-m04 has CIDR [10.244.3.0/24] 
	I1209 22:55:44.242208       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1209 22:55:44.242246       1 main.go:301] handling current node
	I1209 22:55:44.242264       1 main.go:297] Handling node with IPs: map[192.168.39.43:{}]
	I1209 22:55:44.242279       1 main.go:324] Node ha-920193-m02 has CIDR [10.244.1.0/24] 
	I1209 22:55:54.237055       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1209 22:55:54.237098       1 main.go:301] handling current node
	I1209 22:55:54.237112       1 main.go:297] Handling node with IPs: map[192.168.39.43:{}]
	I1209 22:55:54.237117       1 main.go:324] Node ha-920193-m02 has CIDR [10.244.1.0/24] 
	I1209 22:55:54.237320       1 main.go:297] Handling node with IPs: map[192.168.39.45:{}]
	I1209 22:55:54.237342       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:55:54.237447       1 main.go:297] Handling node with IPs: map[192.168.39.98:{}]
	I1209 22:55:54.237463       1 main.go:324] Node ha-920193-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581] <==
	W1209 22:49:43.150982       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102]
	I1209 22:49:43.152002       1 controller.go:615] quota admission added evaluator for: endpoints
	I1209 22:49:43.156330       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 22:49:43.387632       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1209 22:49:44.564732       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1209 22:49:44.579130       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 22:49:44.588831       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1209 22:49:48.591895       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1209 22:49:48.841334       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1209 22:52:22.354256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36040: use of closed network connection
	E1209 22:52:22.536970       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36064: use of closed network connection
	E1209 22:52:22.712523       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36088: use of closed network connection
	E1209 22:52:22.898417       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36102: use of closed network connection
	E1209 22:52:23.071122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36126: use of closed network connection
	E1209 22:52:23.250546       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36138: use of closed network connection
	E1209 22:52:23.423505       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36152: use of closed network connection
	E1209 22:52:23.596493       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36174: use of closed network connection
	E1209 22:52:23.770267       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36200: use of closed network connection
	E1209 22:52:24.059362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36220: use of closed network connection
	E1209 22:52:24.222108       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36234: use of closed network connection
	E1209 22:52:24.394542       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36254: use of closed network connection
	E1209 22:52:24.570825       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36280: use of closed network connection
	E1209 22:52:24.742045       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36308: use of closed network connection
	E1209 22:52:24.918566       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36330: use of closed network connection
	W1209 22:53:53.164722       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.45]
	
	
	==> kube-controller-manager [f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a] <==
	I1209 22:52:55.696316       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	E1209 22:52:55.827513       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"d21ce5c2-c9ae-46d3-8e56-962d14b633c9\", ResourceVersion:\"913\", Generation:1, CreationTimestamp:time.Date(2024, time.December, 9, 22, 49, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\
",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20241108-5c6d2daf\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00247f6a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\
"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0026282e8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolume
ClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002628300), EmptyDir:(*v1.EmptyDirVolumeSource)
(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.Portworx
VolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002628318), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Az
ureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20241108-5c6d2daf\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00247f6c0)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarS
ource)(0xc00247f700)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:fals
e, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc00298a060), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralCont
ainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002895a00), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002509e80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), O
verhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0027a7a80)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002895a3c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1209 22:52:55.828552       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"6fe45e3d-72f3-4c58-8284-ee89d6d57a36\", ResourceVersion:\"871\", Generation:1, CreationTimestamp:time.Date(2024, time.December, 9, 22, 49, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00197c7a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\"
, Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)
(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00265ecc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002193ae8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolume
Source)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVol
umeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002193b00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtual
DiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.31.2\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00197c7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Reso
urceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"
/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0026ee600), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002860a98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0025a4880), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostA
lias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002693bd0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002860af0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled
on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1209 22:52:56.102815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:57.678400       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.159889       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.160065       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-920193-m04"
	I1209 22:52:58.180925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.828069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.908919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:05.805409       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:16.012967       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-920193-m04"
	I1209 22:53:16.013430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:16.029012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:17.646042       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:25.994489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:54:12.667473       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-920193-m04"
	I1209 22:54:12.668375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	I1209 22:54:12.690072       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	I1209 22:54:12.722935       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.821273ms"
	I1209 22:54:12.724268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.814µs"
	I1209 22:54:13.270393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	I1209 22:54:17.915983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	
	
	==> kube-proxy [233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 22:49:50.258403       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 22:49:50.274620       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	E1209 22:49:50.274749       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 22:49:50.309286       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 22:49:50.309340       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 22:49:50.309367       1 server_linux.go:169] "Using iptables Proxier"
	I1209 22:49:50.311514       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 22:49:50.312044       1 server.go:483] "Version info" version="v1.31.2"
	I1209 22:49:50.312073       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 22:49:50.314372       1 config.go:199] "Starting service config controller"
	I1209 22:49:50.314401       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 22:49:50.314584       1 config.go:105] "Starting endpoint slice config controller"
	I1209 22:49:50.314607       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 22:49:50.315221       1 config.go:328] "Starting node config controller"
	I1209 22:49:50.315250       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 22:49:50.415190       1 shared_informer.go:320] Caches are synced for service config
	I1209 22:49:50.415151       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 22:49:50.415308       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963] <==
	W1209 22:49:42.622383       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 22:49:42.622920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 22:49:42.673980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 22:49:42.674373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:49:42.700294       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 22:49:42.700789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 22:49:44.393323       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1209 22:52:18.167059       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkqdv\": pod busybox-7dff88458-rkqdv is already assigned to node \"ha-920193-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rkqdv" node="ha-920193-m02"
	E1209 22:52:18.167170       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c1517f25-fc19-4255-b4c6-9a02511b80c3(default/busybox-7dff88458-rkqdv) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rkqdv"
	E1209 22:52:18.167196       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkqdv\": pod busybox-7dff88458-rkqdv is already assigned to node \"ha-920193-m02\"" pod="default/busybox-7dff88458-rkqdv"
	I1209 22:52:18.167215       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rkqdv" node="ha-920193-m02"
	E1209 22:52:55.621239       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x5mqb\": pod kindnet-x5mqb is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-x5mqb" node="ha-920193-m04"
	E1209 22:52:55.621341       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x5mqb\": pod kindnet-x5mqb is already assigned to node \"ha-920193-m04\"" pod="kube-system/kindnet-x5mqb"
	E1209 22:52:55.648021       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-k5v9w\": pod kube-proxy-k5v9w is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-k5v9w" node="ha-920193-m04"
	E1209 22:52:55.648095       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5882629a-a929-45e4-b026-e75a2c17d56d(kube-system/kube-proxy-k5v9w) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-k5v9w"
	E1209 22:52:55.648113       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-k5v9w\": pod kube-proxy-k5v9w is already assigned to node \"ha-920193-m04\"" pod="kube-system/kube-proxy-k5v9w"
	I1209 22:52:55.648138       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-k5v9w" node="ha-920193-m04"
	E1209 22:52:55.758943       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mp7q7\": pod kube-proxy-mp7q7 is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mp7q7" node="ha-920193-m04"
	E1209 22:52:55.759080       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a4d32bae-6ec6-4338-8689-3b32518b021b(kube-system/kube-proxy-mp7q7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-mp7q7"
	E1209 22:52:55.759142       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mp7q7\": pod kube-proxy-mp7q7 is already assigned to node \"ha-920193-m04\"" pod="kube-system/kube-proxy-mp7q7"
	I1209 22:52:55.759188       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-mp7q7" node="ha-920193-m04"
	E1209 22:52:55.775999       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7d45n\": pod kube-proxy-7d45n is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7d45n" node="ha-920193-m04"
	E1209 22:52:55.776095       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7d45n\": pod kube-proxy-7d45n is already assigned to node \"ha-920193-m04\"" pod="kube-system/kube-proxy-7d45n"
	E1209 22:52:55.784854       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4pzwv\": pod kindnet-4pzwv is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4pzwv" node="ha-920193-m04"
	E1209 22:52:55.785146       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4pzwv\": pod kindnet-4pzwv is already assigned to node \"ha-920193-m04\"" pod="kube-system/kindnet-4pzwv"
	
	
	==> kubelet <==
	Dec 09 22:54:44 ha-920193 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 22:54:44 ha-920193 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 22:54:44 ha-920193 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 22:54:44 ha-920193 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 22:54:44 ha-920193 kubelet[1302]: E1209 22:54:44.581397    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784884581065881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:54:44 ha-920193 kubelet[1302]: E1209 22:54:44.581439    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784884581065881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:54:54 ha-920193 kubelet[1302]: E1209 22:54:54.583096    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784894582573620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:54:54 ha-920193 kubelet[1302]: E1209 22:54:54.583476    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784894582573620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:04 ha-920193 kubelet[1302]: E1209 22:55:04.587043    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784904586404563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:04 ha-920193 kubelet[1302]: E1209 22:55:04.587520    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784904586404563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:14 ha-920193 kubelet[1302]: E1209 22:55:14.590203    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784914589554601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:14 ha-920193 kubelet[1302]: E1209 22:55:14.590522    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784914589554601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:24 ha-920193 kubelet[1302]: E1209 22:55:24.593898    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784924593226467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:24 ha-920193 kubelet[1302]: E1209 22:55:24.593942    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784924593226467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:34 ha-920193 kubelet[1302]: E1209 22:55:34.596079    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784934595337713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:34 ha-920193 kubelet[1302]: E1209 22:55:34.596564    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784934595337713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:44 ha-920193 kubelet[1302]: E1209 22:55:44.520346    1302 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 22:55:44 ha-920193 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 22:55:44 ha-920193 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 22:55:44 ha-920193 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 22:55:44 ha-920193 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 22:55:44 ha-920193 kubelet[1302]: E1209 22:55:44.598917    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784944598396332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:44 ha-920193 kubelet[1302]: E1209 22:55:44.598999    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784944598396332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:54 ha-920193 kubelet[1302]: E1209 22:55:54.601949    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784954601550343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:54 ha-920193 kubelet[1302]: E1209 22:55:54.602225    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784954601550343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
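A note on the controller-manager error captured in the logs above ("Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again"): this is the standard optimistic-concurrency conflict returned when an update is submitted against a stale resourceVersion. The usual remedy is to re-read the object and retry, as client-go's conflict-retry helper does. The following is a minimal illustrative sketch only, not minikube or kube-controller-manager code; the kubeconfig wiring and the annotation mutation are assumptions made for the example.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Build a client from the default kubeconfig; path handling is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Re-read the DaemonSet on every attempt so the update always targets the
	// latest resourceVersion; RetryOnConflict retries only on Conflict errors.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Annotations == nil {
			ds.Annotations = map[string]string{}
		}
		ds.Annotations["example.invalid/touched"] = "true" // placeholder mutation for the sketch
		_, err = cs.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-proxy DaemonSet updated")
}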
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-920193 -n ha-920193
helpers_test.go:261: (dbg) Run:  kubectl --context ha-920193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.46s)
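The post-mortem's final check (helpers_test.go:261 above) lists every pod that is not Running, using a field selector across all namespaces. The same lookup through client-go is sketched below purely for illustration; the kubeconfig context name is taken from the kubectl command above, and the rest of the wiring is an assumption, not the test's actual implementation.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the "ha-920193" context from the default kubeconfig locations.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "ha-920193"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same filter as the post-mortem: all namespaces, pods whose phase is not Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}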

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.374465538s)
ha_test.go:415: expected profile "ha-920193" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-920193\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-920193\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-920193\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.102\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.43\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.45\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.98\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt
\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",
\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-920193 -n ha-920193
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-920193 logs -n 25: (1.367538997s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3609340860/001/cp-test_ha-920193-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193:/home/docker/cp-test_ha-920193-m03_ha-920193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193 sudo cat                                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m02:/home/docker/cp-test_ha-920193-m03_ha-920193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m02 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04:/home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m04 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp testdata/cp-test.txt                                                | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3609340860/001/cp-test_ha-920193-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193:/home/docker/cp-test_ha-920193-m04_ha-920193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193 sudo cat                                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m02:/home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m02 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03:/home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m03 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-920193 node stop m02 -v=7                                                     | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 22:49:03
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 22:49:03.145250   36778 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:49:03.145390   36778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:49:03.145399   36778 out.go:358] Setting ErrFile to fd 2...
	I1209 22:49:03.145404   36778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:49:03.145610   36778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 22:49:03.146205   36778 out.go:352] Setting JSON to false
	I1209 22:49:03.147113   36778 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5494,"bootTime":1733779049,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 22:49:03.147209   36778 start.go:139] virtualization: kvm guest
	I1209 22:49:03.149227   36778 out.go:177] * [ha-920193] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 22:49:03.150446   36778 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 22:49:03.150468   36778 notify.go:220] Checking for updates...
	I1209 22:49:03.152730   36778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:49:03.153842   36778 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:49:03.154957   36778 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:03.156087   36778 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 22:49:03.157179   36778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 22:49:03.158417   36778 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:49:03.193867   36778 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 22:49:03.195030   36778 start.go:297] selected driver: kvm2
	I1209 22:49:03.195046   36778 start.go:901] validating driver "kvm2" against <nil>
	I1209 22:49:03.195060   36778 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 22:49:03.196334   36778 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:49:03.196484   36778 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 22:49:03.213595   36778 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 22:49:03.213648   36778 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 22:49:03.213994   36778 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:49:03.214030   36778 cni.go:84] Creating CNI manager for ""
	I1209 22:49:03.214072   36778 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1209 22:49:03.214085   36778 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 22:49:03.214141   36778 start.go:340] cluster config:
	{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1209 22:49:03.214261   36778 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:49:03.215829   36778 out.go:177] * Starting "ha-920193" primary control-plane node in "ha-920193" cluster
	I1209 22:49:03.216947   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:49:03.216988   36778 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 22:49:03.217002   36778 cache.go:56] Caching tarball of preloaded images
	I1209 22:49:03.217077   36778 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:49:03.217091   36778 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:49:03.217507   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:03.217534   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json: {Name:mk69f8481a2f9361b3b46196caa6653a8d77a9fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:03.217729   36778 start.go:360] acquireMachinesLock for ha-920193: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:49:03.217779   36778 start.go:364] duration metric: took 30.111µs to acquireMachinesLock for "ha-920193"
	I1209 22:49:03.217805   36778 start.go:93] Provisioning new machine with config: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:49:03.217887   36778 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 22:49:03.219504   36778 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 22:49:03.219675   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:03.219709   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:03.234776   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
	I1209 22:49:03.235235   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:03.235843   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:03.235867   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:03.236261   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:03.236466   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:03.236632   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:03.236794   36778 start.go:159] libmachine.API.Create for "ha-920193" (driver="kvm2")
	I1209 22:49:03.236821   36778 client.go:168] LocalClient.Create starting
	I1209 22:49:03.236862   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:49:03.236900   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:03.236922   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:03.237001   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:49:03.237033   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:03.237054   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:03.237078   36778 main.go:141] libmachine: Running pre-create checks...
	I1209 22:49:03.237090   36778 main.go:141] libmachine: (ha-920193) Calling .PreCreateCheck
	I1209 22:49:03.237426   36778 main.go:141] libmachine: (ha-920193) Calling .GetConfigRaw
	I1209 22:49:03.237793   36778 main.go:141] libmachine: Creating machine...
	I1209 22:49:03.237806   36778 main.go:141] libmachine: (ha-920193) Calling .Create
	I1209 22:49:03.237934   36778 main.go:141] libmachine: (ha-920193) Creating KVM machine...
	I1209 22:49:03.239483   36778 main.go:141] libmachine: (ha-920193) DBG | found existing default KVM network
	I1209 22:49:03.240340   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.240142   36801 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1209 22:49:03.240365   36778 main.go:141] libmachine: (ha-920193) DBG | created network xml: 
	I1209 22:49:03.240393   36778 main.go:141] libmachine: (ha-920193) DBG | <network>
	I1209 22:49:03.240407   36778 main.go:141] libmachine: (ha-920193) DBG |   <name>mk-ha-920193</name>
	I1209 22:49:03.240417   36778 main.go:141] libmachine: (ha-920193) DBG |   <dns enable='no'/>
	I1209 22:49:03.240427   36778 main.go:141] libmachine: (ha-920193) DBG |   
	I1209 22:49:03.240438   36778 main.go:141] libmachine: (ha-920193) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 22:49:03.240454   36778 main.go:141] libmachine: (ha-920193) DBG |     <dhcp>
	I1209 22:49:03.240491   36778 main.go:141] libmachine: (ha-920193) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 22:49:03.240508   36778 main.go:141] libmachine: (ha-920193) DBG |     </dhcp>
	I1209 22:49:03.240522   36778 main.go:141] libmachine: (ha-920193) DBG |   </ip>
	I1209 22:49:03.240532   36778 main.go:141] libmachine: (ha-920193) DBG |   
	I1209 22:49:03.240542   36778 main.go:141] libmachine: (ha-920193) DBG | </network>
	I1209 22:49:03.240557   36778 main.go:141] libmachine: (ha-920193) DBG | 
	I1209 22:49:03.245903   36778 main.go:141] libmachine: (ha-920193) DBG | trying to create private KVM network mk-ha-920193 192.168.39.0/24...
	I1209 22:49:03.312870   36778 main.go:141] libmachine: (ha-920193) DBG | private KVM network mk-ha-920193 192.168.39.0/24 created
	I1209 22:49:03.312901   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.312803   36801 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:03.312925   36778 main.go:141] libmachine: (ha-920193) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193 ...
	I1209 22:49:03.312938   36778 main.go:141] libmachine: (ha-920193) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:49:03.312960   36778 main.go:141] libmachine: (ha-920193) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:49:03.559720   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.559511   36801 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa...
	I1209 22:49:03.632777   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.632628   36801 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/ha-920193.rawdisk...
	I1209 22:49:03.632808   36778 main.go:141] libmachine: (ha-920193) DBG | Writing magic tar header
	I1209 22:49:03.632868   36778 main.go:141] libmachine: (ha-920193) DBG | Writing SSH key tar header
	I1209 22:49:03.632897   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.632735   36801 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193 ...
	I1209 22:49:03.632914   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193 (perms=drwx------)
	I1209 22:49:03.632931   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:49:03.632938   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:49:03.632951   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:49:03.632959   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:49:03.632968   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:49:03.632988   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193
	I1209 22:49:03.632996   36778 main.go:141] libmachine: (ha-920193) Creating domain...
	I1209 22:49:03.633013   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:49:03.633026   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:03.633034   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:49:03.633039   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:49:03.633046   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:49:03.633051   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home
	I1209 22:49:03.633058   36778 main.go:141] libmachine: (ha-920193) DBG | Skipping /home - not owner
	I1209 22:49:03.634033   36778 main.go:141] libmachine: (ha-920193) define libvirt domain using xml: 
	I1209 22:49:03.634053   36778 main.go:141] libmachine: (ha-920193) <domain type='kvm'>
	I1209 22:49:03.634063   36778 main.go:141] libmachine: (ha-920193)   <name>ha-920193</name>
	I1209 22:49:03.634077   36778 main.go:141] libmachine: (ha-920193)   <memory unit='MiB'>2200</memory>
	I1209 22:49:03.634087   36778 main.go:141] libmachine: (ha-920193)   <vcpu>2</vcpu>
	I1209 22:49:03.634099   36778 main.go:141] libmachine: (ha-920193)   <features>
	I1209 22:49:03.634108   36778 main.go:141] libmachine: (ha-920193)     <acpi/>
	I1209 22:49:03.634117   36778 main.go:141] libmachine: (ha-920193)     <apic/>
	I1209 22:49:03.634126   36778 main.go:141] libmachine: (ha-920193)     <pae/>
	I1209 22:49:03.634143   36778 main.go:141] libmachine: (ha-920193)     
	I1209 22:49:03.634155   36778 main.go:141] libmachine: (ha-920193)   </features>
	I1209 22:49:03.634163   36778 main.go:141] libmachine: (ha-920193)   <cpu mode='host-passthrough'>
	I1209 22:49:03.634172   36778 main.go:141] libmachine: (ha-920193)   
	I1209 22:49:03.634184   36778 main.go:141] libmachine: (ha-920193)   </cpu>
	I1209 22:49:03.634192   36778 main.go:141] libmachine: (ha-920193)   <os>
	I1209 22:49:03.634200   36778 main.go:141] libmachine: (ha-920193)     <type>hvm</type>
	I1209 22:49:03.634209   36778 main.go:141] libmachine: (ha-920193)     <boot dev='cdrom'/>
	I1209 22:49:03.634217   36778 main.go:141] libmachine: (ha-920193)     <boot dev='hd'/>
	I1209 22:49:03.634226   36778 main.go:141] libmachine: (ha-920193)     <bootmenu enable='no'/>
	I1209 22:49:03.634233   36778 main.go:141] libmachine: (ha-920193)   </os>
	I1209 22:49:03.634241   36778 main.go:141] libmachine: (ha-920193)   <devices>
	I1209 22:49:03.634250   36778 main.go:141] libmachine: (ha-920193)     <disk type='file' device='cdrom'>
	I1209 22:49:03.634279   36778 main.go:141] libmachine: (ha-920193)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/boot2docker.iso'/>
	I1209 22:49:03.634301   36778 main.go:141] libmachine: (ha-920193)       <target dev='hdc' bus='scsi'/>
	I1209 22:49:03.634316   36778 main.go:141] libmachine: (ha-920193)       <readonly/>
	I1209 22:49:03.634323   36778 main.go:141] libmachine: (ha-920193)     </disk>
	I1209 22:49:03.634332   36778 main.go:141] libmachine: (ha-920193)     <disk type='file' device='disk'>
	I1209 22:49:03.634344   36778 main.go:141] libmachine: (ha-920193)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:49:03.634359   36778 main.go:141] libmachine: (ha-920193)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/ha-920193.rawdisk'/>
	I1209 22:49:03.634367   36778 main.go:141] libmachine: (ha-920193)       <target dev='hda' bus='virtio'/>
	I1209 22:49:03.634375   36778 main.go:141] libmachine: (ha-920193)     </disk>
	I1209 22:49:03.634383   36778 main.go:141] libmachine: (ha-920193)     <interface type='network'>
	I1209 22:49:03.634391   36778 main.go:141] libmachine: (ha-920193)       <source network='mk-ha-920193'/>
	I1209 22:49:03.634409   36778 main.go:141] libmachine: (ha-920193)       <model type='virtio'/>
	I1209 22:49:03.634421   36778 main.go:141] libmachine: (ha-920193)     </interface>
	I1209 22:49:03.634431   36778 main.go:141] libmachine: (ha-920193)     <interface type='network'>
	I1209 22:49:03.634442   36778 main.go:141] libmachine: (ha-920193)       <source network='default'/>
	I1209 22:49:03.634452   36778 main.go:141] libmachine: (ha-920193)       <model type='virtio'/>
	I1209 22:49:03.634463   36778 main.go:141] libmachine: (ha-920193)     </interface>
	I1209 22:49:03.634473   36778 main.go:141] libmachine: (ha-920193)     <serial type='pty'>
	I1209 22:49:03.634484   36778 main.go:141] libmachine: (ha-920193)       <target port='0'/>
	I1209 22:49:03.634498   36778 main.go:141] libmachine: (ha-920193)     </serial>
	I1209 22:49:03.634535   36778 main.go:141] libmachine: (ha-920193)     <console type='pty'>
	I1209 22:49:03.634561   36778 main.go:141] libmachine: (ha-920193)       <target type='serial' port='0'/>
	I1209 22:49:03.634581   36778 main.go:141] libmachine: (ha-920193)     </console>
	I1209 22:49:03.634592   36778 main.go:141] libmachine: (ha-920193)     <rng model='virtio'>
	I1209 22:49:03.634601   36778 main.go:141] libmachine: (ha-920193)       <backend model='random'>/dev/random</backend>
	I1209 22:49:03.634611   36778 main.go:141] libmachine: (ha-920193)     </rng>
	I1209 22:49:03.634621   36778 main.go:141] libmachine: (ha-920193)     
	I1209 22:49:03.634629   36778 main.go:141] libmachine: (ha-920193)     
	I1209 22:49:03.634634   36778 main.go:141] libmachine: (ha-920193)   </devices>
	I1209 22:49:03.634641   36778 main.go:141] libmachine: (ha-920193) </domain>
	I1209 22:49:03.634660   36778 main.go:141] libmachine: (ha-920193) 
	I1209 22:49:03.638977   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:88:5b:26 in network default
	I1209 22:49:03.639478   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:03.639517   36778 main.go:141] libmachine: (ha-920193) Ensuring networks are active...
	I1209 22:49:03.640151   36778 main.go:141] libmachine: (ha-920193) Ensuring network default is active
	I1209 22:49:03.640468   36778 main.go:141] libmachine: (ha-920193) Ensuring network mk-ha-920193 is active
	I1209 22:49:03.640970   36778 main.go:141] libmachine: (ha-920193) Getting domain xml...
	I1209 22:49:03.641682   36778 main.go:141] libmachine: (ha-920193) Creating domain...
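Editor's note: the step above is the kvm2 driver defining the libvirt domain from the XML it just printed, making sure both networks (default and mk-ha-920193) are active, and then booting the domain. This is not minikube's actual code, but a minimal sketch of that flow using the libvirt Go binding (module path and the "ha-920193.xml" file name are assumptions for illustration):

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt" // assumed binding; the kvm2 driver uses an equivalent libvirt API
)

func main() {
	// KVMQemuURI from the cluster config shown later in the log.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// "Ensuring networks are active...": start each referenced network if needed.
	for _, name := range []string{"default", "mk-ha-920193"} {
		net, err := conn.LookupNetworkByName(name)
		if err != nil {
			log.Fatal(err)
		}
		if active, _ := net.IsActive(); !active {
			if err := net.Create(); err != nil {
				log.Fatal(err)
			}
		}
		net.Free()
	}

	// Define the domain from the XML printed in the log, then boot it
	// ("Creating domain...").
	xml, err := os.ReadFile("ha-920193.xml") // hypothetical file holding the XML above
	if err != nil {
		log.Fatal(err)
	}
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}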
	I1209 22:49:04.829698   36778 main.go:141] libmachine: (ha-920193) Waiting to get IP...
	I1209 22:49:04.830434   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:04.830835   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:04.830867   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:04.830824   36801 retry.go:31] will retry after 207.081791ms: waiting for machine to come up
	I1209 22:49:05.039144   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:05.039519   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:05.039585   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:05.039471   36801 retry.go:31] will retry after 281.967291ms: waiting for machine to come up
	I1209 22:49:05.322964   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:05.323366   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:05.323382   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:05.323322   36801 retry.go:31] will retry after 481.505756ms: waiting for machine to come up
	I1209 22:49:05.805961   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:05.806356   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:05.806376   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:05.806314   36801 retry.go:31] will retry after 549.592497ms: waiting for machine to come up
	I1209 22:49:06.357773   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:06.358284   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:06.358319   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:06.358243   36801 retry.go:31] will retry after 535.906392ms: waiting for machine to come up
	I1209 22:49:06.896232   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:06.896608   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:06.896631   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:06.896560   36801 retry.go:31] will retry after 874.489459ms: waiting for machine to come up
	I1209 22:49:07.772350   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:07.772754   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:07.772787   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:07.772706   36801 retry.go:31] will retry after 1.162571844s: waiting for machine to come up
	I1209 22:49:08.936520   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:08.936889   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:08.936917   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:08.936873   36801 retry.go:31] will retry after 1.45755084s: waiting for machine to come up
	I1209 22:49:10.396453   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:10.396871   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:10.396892   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:10.396843   36801 retry.go:31] will retry after 1.609479332s: waiting for machine to come up
	I1209 22:49:12.008693   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:12.009140   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:12.009166   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:12.009087   36801 retry.go:31] will retry after 2.268363531s: waiting for machine to come up
	I1209 22:49:14.279389   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:14.279856   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:14.279912   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:14.279851   36801 retry.go:31] will retry after 2.675009942s: waiting for machine to come up
	I1209 22:49:16.957696   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:16.958066   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:16.958096   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:16.958013   36801 retry.go:31] will retry after 2.665510056s: waiting for machine to come up
	I1209 22:49:19.624784   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:19.625187   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:19.625202   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:19.625166   36801 retry.go:31] will retry after 2.857667417s: waiting for machine to come up
	I1209 22:49:22.486137   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:22.486540   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:22.486563   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:22.486493   36801 retry.go:31] will retry after 4.026256687s: waiting for machine to come up
	I1209 22:49:26.516409   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.516832   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has current primary IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.516858   36778 main.go:141] libmachine: (ha-920193) Found IP for machine: 192.168.39.102
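Editor's note: every "will retry after ...: waiting for machine to come up" line above comes from a retry helper that polls the network's DHCP leases for the domain's MAC with a growing, jittered delay until an address appears. A stand-alone sketch of that pattern (function and variable names are mine, not minikube's):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a growing, jittered delay until it
// succeeds or the overall deadline passes, mirroring the increasing
// 207ms / 281ms / 481ms ... waits in the log.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Add up to 50% jitter, then roughly double the base wait, capped at a few seconds.
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)+1))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if wait < 4*time.Second {
			wait *= 2
		}
	}
}

func main() {
	attempts := 0
	err := retryUntil(3*time.Minute, func() error {
		// The real check looks up the domain's MAC (52:54:00:eb:3c:cb) in the
		// libvirt DHCP leases; here we simply pretend the lease shows up on try 4.
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address of domain in network")
		}
		return nil
	})
	fmt.Println("done:", err)
}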
	I1209 22:49:26.516892   36778 main.go:141] libmachine: (ha-920193) Reserving static IP address...
	I1209 22:49:26.517220   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find host DHCP lease matching {name: "ha-920193", mac: "52:54:00:eb:3c:cb", ip: "192.168.39.102"} in network mk-ha-920193
	I1209 22:49:26.587512   36778 main.go:141] libmachine: (ha-920193) DBG | Getting to WaitForSSH function...
	I1209 22:49:26.587538   36778 main.go:141] libmachine: (ha-920193) Reserved static IP address: 192.168.39.102
	I1209 22:49:26.587551   36778 main.go:141] libmachine: (ha-920193) Waiting for SSH to be available...
	I1209 22:49:26.589724   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.590056   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.590080   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.590252   36778 main.go:141] libmachine: (ha-920193) DBG | Using SSH client type: external
	I1209 22:49:26.590281   36778 main.go:141] libmachine: (ha-920193) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa (-rw-------)
	I1209 22:49:26.590312   36778 main.go:141] libmachine: (ha-920193) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:49:26.590335   36778 main.go:141] libmachine: (ha-920193) DBG | About to run SSH command:
	I1209 22:49:26.590368   36778 main.go:141] libmachine: (ha-920193) DBG | exit 0
	I1209 22:49:26.707404   36778 main.go:141] libmachine: (ha-920193) DBG | SSH cmd err, output: <nil>: 
	I1209 22:49:26.707687   36778 main.go:141] libmachine: (ha-920193) KVM machine creation complete!
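Editor's note: the "Using SSH client type: external" block above is the driver shelling out to the system ssh binary with a fixed option set and running "exit 0"; a zero exit status means sshd is up and the injected key works. A hedged sketch of that probe, with the argument vector copied from the log (the polling loop and interval are my additions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeSSH invokes /usr/bin/ssh with the options shown in the log and runs
// "exit 0"; a nil error means the command exited with status 0.
func probeSSH(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"docker@" + ip,
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa"
	for {
		if err := probeSSH("192.168.39.102", key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
}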
	I1209 22:49:26.708024   36778 main.go:141] libmachine: (ha-920193) Calling .GetConfigRaw
	I1209 22:49:26.708523   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:26.708739   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:26.708918   36778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:49:26.708931   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:26.710113   36778 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:49:26.710125   36778 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:49:26.710130   36778 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:49:26.710135   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:26.712426   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.712765   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.712791   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.712925   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:26.713081   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.713185   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.713306   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:26.713452   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:26.713680   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:26.713692   36778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:49:26.806695   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:49:26.806717   36778 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:49:26.806725   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:26.809366   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.809767   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.809800   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.809958   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:26.810141   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.810311   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.810444   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:26.810627   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:26.810776   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:26.810787   36778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:49:26.908040   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:49:26.908090   36778 main.go:141] libmachine: found compatible host: buildroot
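Editor's note: after creation, provisioning switches to the native Go SSH client and detects the provisioner by running "cat /etc/os-release" and reading the ID field ("buildroot" here). A minimal sketch of that detection with golang.org/x/crypto/ssh, assuming the same key-based login as above:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.102:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	// Pick out the ID= line; "buildroot" selects the buildroot provisioner.
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "ID=") {
			fmt.Println("provisioner:", strings.Trim(strings.TrimPrefix(line, "ID="), `"`))
		}
	}
}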
	I1209 22:49:26.908097   36778 main.go:141] libmachine: Provisioning with buildroot...
	I1209 22:49:26.908104   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:26.908364   36778 buildroot.go:166] provisioning hostname "ha-920193"
	I1209 22:49:26.908392   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:26.908590   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:26.911118   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.911513   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.911538   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.911715   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:26.911868   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.911989   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.912100   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:26.912224   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:26.912420   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:26.912438   36778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193 && echo "ha-920193" | sudo tee /etc/hostname
	I1209 22:49:27.020773   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193
	
	I1209 22:49:27.020799   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.023575   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.023846   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.023871   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.024029   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.024220   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.024374   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.024530   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.024691   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:27.024872   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:27.024888   36778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:49:27.127613   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
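Editor's note: hostname provisioning above is just two SSH commands: set the hostname, then keep /etc/hosts consistent (replace an existing 127.0.1.1 line if there is one, otherwise append). A small sketch that builds those command strings, reusing the shell fragments from the log verbatim:

package main

import "fmt"

// hostnameCommands returns the two shell commands the provisioner runs: one to
// set the hostname, one idempotent edit of /etc/hosts (sed replaces an existing
// 127.0.1.1 entry, tee -a appends a new one).
func hostnameCommands(name string) []string {
	set := fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, name, name)
	hosts := fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	return []string{set, hosts}
}

func main() {
	for _, cmd := range hostnameCommands("ha-920193") {
		fmt.Println(cmd)
	}
}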
	I1209 22:49:27.127642   36778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:49:27.127660   36778 buildroot.go:174] setting up certificates
	I1209 22:49:27.127691   36778 provision.go:84] configureAuth start
	I1209 22:49:27.127710   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:27.127961   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:27.130248   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.130591   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.130619   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.130738   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.132923   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.133247   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.133271   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.133422   36778 provision.go:143] copyHostCerts
	I1209 22:49:27.133461   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:49:27.133491   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:49:27.133506   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:49:27.133573   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:49:27.133653   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:49:27.133670   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:49:27.133677   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:49:27.133702   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:49:27.133745   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:49:27.133761   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:49:27.133767   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:49:27.133788   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:49:27.133835   36778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193 san=[127.0.0.1 192.168.39.102 ha-920193 localhost minikube]
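Editor's note: configureAuth regenerates a server certificate carrying the SANs listed above (127.0.0.1, the VM IP, the hostname, localhost, minikube). As a rough, self-contained sketch of SAN-bearing certificate generation with the standard library (self-signed here for brevity, whereas the real flow signs with the minikube CA's ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-920193"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // placeholder lifetime
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the "generating server cert" line above.
		DNSNames:    []string{"ha-920193", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.102")},
	}
	// Self-signed for the sketch; minikube signs with the CA key pair instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	certOut, err := os.Create("server.pem")
	if err != nil {
		log.Fatal(err)
	}
	defer certOut.Close()
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}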
	I1209 22:49:27.297434   36778 provision.go:177] copyRemoteCerts
	I1209 22:49:27.297494   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:49:27.297515   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.300069   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.300424   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.300443   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.300615   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.300792   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.300928   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.301029   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.378773   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:49:27.378830   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1209 22:49:27.403553   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:49:27.403627   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 22:49:27.425459   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:49:27.425526   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:49:27.449197   36778 provision.go:87] duration metric: took 321.487984ms to configureAuth
	I1209 22:49:27.449229   36778 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:49:27.449449   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:49:27.449534   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.453191   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.453559   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.453595   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.453759   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.453939   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.454070   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.454184   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.454317   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:27.454513   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:27.454534   36778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:49:27.653703   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:49:27.653733   36778 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:49:27.653756   36778 main.go:141] libmachine: (ha-920193) Calling .GetURL
	I1209 22:49:27.655032   36778 main.go:141] libmachine: (ha-920193) DBG | Using libvirt version 6000000
	I1209 22:49:27.657160   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.657463   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.657491   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.657682   36778 main.go:141] libmachine: Docker is up and running!
	I1209 22:49:27.657699   36778 main.go:141] libmachine: Reticulating splines...
	I1209 22:49:27.657708   36778 client.go:171] duration metric: took 24.420875377s to LocalClient.Create
	I1209 22:49:27.657735   36778 start.go:167] duration metric: took 24.420942176s to libmachine.API.Create "ha-920193"
	I1209 22:49:27.657747   36778 start.go:293] postStartSetup for "ha-920193" (driver="kvm2")
	I1209 22:49:27.657761   36778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:49:27.657785   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.657983   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:49:27.658006   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.659917   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.660172   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.660200   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.660370   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.660519   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.660646   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.660782   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.737935   36778 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:49:27.741969   36778 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:49:27.741998   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:49:27.742081   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:49:27.742178   36778 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:49:27.742190   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:49:27.742316   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:49:27.752769   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:49:27.776187   36778 start.go:296] duration metric: took 118.424893ms for postStartSetup
	I1209 22:49:27.776233   36778 main.go:141] libmachine: (ha-920193) Calling .GetConfigRaw
	I1209 22:49:27.776813   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:27.779433   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.779777   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.779809   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.780018   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:27.780196   36778 start.go:128] duration metric: took 24.562298059s to createHost
	I1209 22:49:27.780219   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.782389   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.782713   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.782737   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.782928   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.783093   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.783255   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.783378   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.783531   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:27.783762   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:27.783780   36778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:49:27.880035   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733784567.857266275
	
	I1209 22:49:27.880058   36778 fix.go:216] guest clock: 1733784567.857266275
	I1209 22:49:27.880065   36778 fix.go:229] Guest: 2024-12-09 22:49:27.857266275 +0000 UTC Remote: 2024-12-09 22:49:27.780207864 +0000 UTC m=+24.672894470 (delta=77.058411ms)
	I1209 22:49:27.880084   36778 fix.go:200] guest clock delta is within tolerance: 77.058411ms
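Editor's note: the clock check above runs `date +%s.%N` on the guest, parses the output as seconds, and compares it with the local time recorded a moment earlier; only a delta beyond some tolerance would trigger a resync. A small sketch of that comparison (the tolerance constant is my placeholder, not minikube's exact value):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from the given local reference time.
func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(local), nil
}

func main() {
	// Values taken from the log: guest 1733784567.857..., local 22:49:27.780... UTC.
	local := time.Date(2024, 12, 9, 22, 49, 27, 780207864, time.UTC)
	delta, err := clockDelta("1733784567.857266275", local)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold for illustration
	if math.Abs(float64(delta)) > float64(tolerance) {
		fmt.Println("guest clock out of tolerance:", delta)
	} else {
		fmt.Println("guest clock delta is within tolerance:", delta) // ~77ms for the values above
	}
}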
	I1209 22:49:27.880088   36778 start.go:83] releasing machines lock for "ha-920193", held for 24.662297943s
	I1209 22:49:27.880110   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.880381   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:27.883090   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.883418   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.883452   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.883630   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.884081   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.884211   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.884272   36778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:49:27.884329   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.884381   36778 ssh_runner.go:195] Run: cat /version.json
	I1209 22:49:27.884403   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.886622   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.886872   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.886899   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.886994   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.887039   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.887207   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.887321   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.887333   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.887353   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.887479   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.887529   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.887692   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.887829   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.887976   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.963462   36778 ssh_runner.go:195] Run: systemctl --version
	I1209 22:49:27.986028   36778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:49:28.143161   36778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:49:28.149221   36778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:49:28.149289   36778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:49:28.165410   36778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 22:49:28.165442   36778 start.go:495] detecting cgroup driver to use...
	I1209 22:49:28.165509   36778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:49:28.181384   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:49:28.195011   36778 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:49:28.195063   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:49:28.208554   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:49:28.222230   36778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:49:28.338093   36778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:49:28.483809   36778 docker.go:233] disabling docker service ...
	I1209 22:49:28.483868   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:49:28.497723   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:49:28.510133   36778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:49:28.637703   36778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:49:28.768621   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:49:28.781961   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:49:28.799140   36778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:49:28.799205   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.808634   36778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:49:28.808697   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.818355   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.827780   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.837191   36778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:49:28.846758   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.856291   36778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.872403   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.881716   36778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:49:28.890298   36778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:49:28.890355   36778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:49:28.902738   36778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 22:49:28.911729   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:49:29.013922   36778 ssh_runner.go:195] Run: sudo systemctl restart crio
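Editor's note: the sequence just above points the node at CRI-O: write /etc/crictl.yaml, patch /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroupfs cgroup manager, conmon cgroup), load br_netfilter when the bridge sysctl is missing, enable ip_forward, then daemon-reload and restart crio. A sketch that simply collects those commands and runs them in order over a guest command runner (the Runner interface and printRunner are hypothetical stand-ins for minikube's ssh_runner):

package main

import "fmt"

// Runner abstracts "run this shell command on the guest".
type Runner interface {
	Run(cmd string) error
}

// configureCRIO replays the sequence of edits from the log and restarts crio.
func configureCRIO(r Runner, pauseImage string) error {
	cmds := []string{
		`sudo sh -c 'mkdir -p /etc && printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" > /etc/crictl.yaml'`,
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo modprobe br_netfilter || true`, // fallback when the bridge-nf sysctl is absent
		`sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := r.Run(c); err != nil {
			return fmt.Errorf("%q failed: %w", c, err)
		}
	}
	return nil
}

type printRunner struct{}

func (printRunner) Run(cmd string) error { fmt.Println("+", cmd); return nil }

func main() {
	if err := configureCRIO(printRunner{}, "registry.k8s.io/pause:3.10"); err != nil {
		panic(err)
	}
}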
	I1209 22:49:29.106638   36778 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:49:29.106719   36778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:49:29.111193   36778 start.go:563] Will wait 60s for crictl version
	I1209 22:49:29.111261   36778 ssh_runner.go:195] Run: which crictl
	I1209 22:49:29.115298   36778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:49:29.151109   36778 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:49:29.151178   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:49:29.178245   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:49:29.206246   36778 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:49:29.207478   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:29.209787   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:29.210134   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:29.210160   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:29.210332   36778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:49:29.214243   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:49:29.226620   36778 kubeadm.go:883] updating cluster {Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 22:49:29.226723   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:49:29.226766   36778 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:49:29.257928   36778 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 22:49:29.257999   36778 ssh_runner.go:195] Run: which lz4
	I1209 22:49:29.261848   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1209 22:49:29.261955   36778 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 22:49:29.265782   36778 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 22:49:29.265814   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 22:49:30.441006   36778 crio.go:462] duration metric: took 1.179084887s to copy over tarball
	I1209 22:49:30.441074   36778 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 22:49:32.468580   36778 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.027482243s)
	I1209 22:49:32.468624   36778 crio.go:469] duration metric: took 2.027585779s to extract the tarball
	I1209 22:49:32.468641   36778 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 22:49:32.505123   36778 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:49:32.547324   36778 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 22:49:32.547346   36778 cache_images.go:84] Images are preloaded, skipping loading
	I1209 22:49:32.547353   36778 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.2 crio true true} ...
	I1209 22:49:32.547438   36778 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 22:49:32.547498   36778 ssh_runner.go:195] Run: crio config
	I1209 22:49:32.589945   36778 cni.go:84] Creating CNI manager for ""
	I1209 22:49:32.589970   36778 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 22:49:32.589982   36778 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 22:49:32.590011   36778 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-920193 NodeName:ha-920193 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 22:49:32.590137   36778 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-920193"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.102"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
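Editor's note: the generated file above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check against such a file is to decode every document and confirm the pod CIDR in ClusterConfiguration matches the kube-proxy clusterCIDR. A sketch of that check, assuming gopkg.in/yaml.v3 and a local copy of the file named kubeadm.yaml (error handling kept minimal):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// Hypothetical local copy of /var/tmp/minikube/kubeadm.yaml.
    	f, err := os.Open("kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	var podSubnet, clusterCIDR string
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		switch doc["kind"] {
    		case "ClusterConfiguration":
    			if nw, ok := doc["networking"].(map[string]interface{}); ok {
    				podSubnet, _ = nw["podSubnet"].(string)
    			}
    		case "KubeProxyConfiguration":
    			clusterCIDR, _ = doc["clusterCIDR"].(string)
    		}
    	}
    	fmt.Printf("podSubnet=%q clusterCIDR=%q match=%v\n", podSubnet, clusterCIDR, podSubnet == clusterCIDR)
    }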
	
	I1209 22:49:32.590159   36778 kube-vip.go:115] generating kube-vip config ...
	I1209 22:49:32.590202   36778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:49:32.605724   36778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:49:32.605829   36778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
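Editor's note: kube-vip runs here as a static pod on each control-plane node, wins a lease-based leader election (the plndr-cp-lock lease in kube-system) and advertises the HA VIP 192.168.39.254 via ARP, load-balancing API traffic on port 8443. A trivial reachability check for that VIP, sketched in Go (address and port come from the manifest above; the timeout is arbitrary):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The kube-vip manifest above advertises this VIP for the API server.
    	addr := "192.168.39.254:8443"
    	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    	if err != nil {
    		fmt.Println("VIP not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("VIP reachable at", conn.RemoteAddr())
    }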
	I1209 22:49:32.605883   36778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:49:32.615285   36778 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 22:49:32.615345   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1209 22:49:32.624299   36778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1209 22:49:32.639876   36778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:49:32.656137   36778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1209 22:49:32.672494   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1209 22:49:32.688039   36778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:49:32.691843   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
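Editor's note: the one-liner above makes the hosts entry idempotent: it strips any existing control-plane.minikube.internal line, appends the current mapping, and copies the result back over /etc/hosts. The same idea as a rough Go sketch (write access to /etc/hosts is an assumption; try it on a scratch copy first):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const hostsPath = "/etc/hosts" // assumption: writable (the log runs this step via sudo)
    	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"

    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop any stale mapping for the control-plane name, keep everything else.
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
    	tmp := hostsPath + ".tmp"
    	if err := os.WriteFile(tmp, []byte(out), 0644); err != nil {
    		panic(err)
    	}
    	if err := os.Rename(tmp, hostsPath); err != nil {
    		panic(err)
    	}
    }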
	I1209 22:49:32.703440   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:49:32.825661   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:49:32.842362   36778 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.102
	I1209 22:49:32.842387   36778 certs.go:194] generating shared ca certs ...
	I1209 22:49:32.842404   36778 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:32.842561   36778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:49:32.842601   36778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:49:32.842611   36778 certs.go:256] generating profile certs ...
	I1209 22:49:32.842674   36778 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:49:32.842693   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt with IP's: []
	I1209 22:49:32.980963   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt ...
	I1209 22:49:32.980992   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt: {Name:mkd9ec798303363f6538acfc05f1a5f04066e731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:32.981176   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key ...
	I1209 22:49:32.981188   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key: {Name:mk056f923a34783de09213845e376bddce6f3df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:32.981268   36778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19
	I1209 22:49:32.981285   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I1209 22:49:33.242216   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19 ...
	I1209 22:49:33.242250   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19: {Name:mk7179026523f0b057d26b52e40a5885ad95d8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.242434   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19 ...
	I1209 22:49:33.242448   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19: {Name:mk65609d82220269362f492c0a2d0cc4da8d1447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.242525   36778 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19 -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:49:33.242596   36778 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19 -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
	I1209 22:49:33.242650   36778 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:49:33.242665   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt with IP's: []
	I1209 22:49:33.389277   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt ...
	I1209 22:49:33.389307   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt: {Name:mk8b70654b36de7093b054b1d0d39a95b39d45fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.389473   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key ...
	I1209 22:49:33.389485   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key: {Name:mk4ec3e3be54da03f1d1683c75f10f14c0904ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
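Editor's note: the profile certs generated above are ordinary X.509 certificates signed by the minikube CA, with the relevant addresses (service IP 10.96.0.1, localhost, 10.0.0.1, the node IP and the HA VIP) baked in as IP SANs. A compact sketch of the same pattern with Go's crypto/x509; it is self-contained (it creates a throwaway CA instead of reading the real ca.key, and error handling is elided), so it illustrates the technique rather than reproducing minikube's crypto.go:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for the minikubeCA key pair reused in the log.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(3, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// API-server leaf cert with the same IP SANs the log lists for apiserver.crt.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leaf := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.102"),
    			net.ParseIP("192.168.39.254"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }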
	I1209 22:49:33.389559   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:49:33.389576   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:49:33.389587   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:49:33.389600   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:49:33.389610   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:49:33.389620   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:49:33.389632   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:49:33.389642   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:49:33.389693   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:49:33.389729   36778 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:49:33.389739   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:49:33.389758   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:49:33.389781   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:49:33.389801   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:49:33.389837   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:49:33.389863   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.389878   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.389890   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.390445   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:49:33.414470   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:49:33.436920   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:49:33.458977   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:49:33.481846   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 22:49:33.503907   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 22:49:33.525852   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:49:33.548215   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:49:33.569802   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:49:33.602465   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:49:33.628007   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:49:33.653061   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 22:49:33.668632   36778 ssh_runner.go:195] Run: openssl version
	I1209 22:49:33.674257   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:49:33.684380   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.688650   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.688714   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.694036   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:49:33.704144   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:49:33.714060   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.718184   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.718227   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.723730   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 22:49:33.734203   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:49:33.744729   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.749033   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.749080   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.754563   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
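Editor's note: the ln -fs commands above are how the guest's trust store gets updated: each PEM is copied into /usr/share/ca-certificates, and a symlink named after its OpenSSL subject hash (e.g. b5213941.0) is created in /etc/ssl/certs. A small sketch of the same step in Go, shelling out to openssl for the hash exactly as the log does (the cert path is taken from the log; error handling kept minimal):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	certPath := "/usr/share/ca-certificates/minikubeCA.pem"

    	// `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"

    	// Recreate the symlink idempotently, mirroring `ln -fs`.
    	_ = os.Remove(link)
    	if err := os.Symlink(certPath, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", certPath)
    }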
	I1209 22:49:33.764859   36778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:49:33.768876   36778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:49:33.768937   36778 kubeadm.go:392] StartCluster: {Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:49:33.769036   36778 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 22:49:33.769105   36778 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 22:49:33.804100   36778 cri.go:89] found id: ""
	I1209 22:49:33.804165   36778 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 22:49:33.814344   36778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 22:49:33.824218   36778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 22:49:33.834084   36778 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 22:49:33.834106   36778 kubeadm.go:157] found existing configuration files:
	
	I1209 22:49:33.834157   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 22:49:33.843339   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 22:49:33.843379   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 22:49:33.853049   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 22:49:33.862222   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 22:49:33.862280   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 22:49:33.872041   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 22:49:33.881416   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 22:49:33.881475   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 22:49:33.891237   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 22:49:33.900609   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 22:49:33.900659   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 22:49:33.910089   36778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 22:49:34.000063   36778 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 22:49:34.000183   36778 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 22:49:34.091544   36778 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 22:49:34.091739   36778 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 22:49:34.091892   36778 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 22:49:34.100090   36778 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 22:49:34.102871   36778 out.go:235]   - Generating certificates and keys ...
	I1209 22:49:34.103528   36778 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 22:49:34.103648   36778 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 22:49:34.284340   36778 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 22:49:34.462874   36778 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 22:49:34.647453   36778 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 22:49:34.787984   36778 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 22:49:35.020609   36778 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 22:49:35.020761   36778 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-920193 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1209 22:49:35.078800   36778 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 22:49:35.078977   36778 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-920193 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1209 22:49:35.150500   36778 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 22:49:35.230381   36778 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 22:49:35.499235   36778 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 22:49:35.499319   36778 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 22:49:35.912886   36778 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 22:49:36.241120   36778 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 22:49:36.405939   36778 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 22:49:36.604047   36778 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 22:49:36.814671   36778 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 22:49:36.815164   36778 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 22:49:36.818373   36778 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 22:49:36.820325   36778 out.go:235]   - Booting up control plane ...
	I1209 22:49:36.820430   36778 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 22:49:36.820522   36778 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 22:49:36.821468   36778 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 22:49:36.841330   36778 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 22:49:36.848308   36778 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 22:49:36.848421   36778 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 22:49:36.995410   36778 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 22:49:36.995535   36778 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 22:49:37.995683   36778 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001015441s
	I1209 22:49:37.995786   36778 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 22:49:43.754200   36778 kubeadm.go:310] [api-check] The API server is healthy after 5.761609039s
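Editor's note: both waits above are plain HTTP polls: the kubelet exposes a healthz endpoint on 127.0.0.1:10248, and kubeadm retries it (and later the API server's health endpoint) until it returns 200 or the 4m0s budget runs out. A minimal poller in the same spirit, sketched in Go against the kubelet endpoint (URL from the log line; sleep interval is arbitrary):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	const url = "http://127.0.0.1:10248/healthz" // kubelet health endpoint from the log above
    	deadline := time.Now().Add(4 * time.Minute)  // kubeadm allows up to 4m0s

    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("kubelet healthy")
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("kubelet did not become healthy in time")
    }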
	I1209 22:49:43.767861   36778 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 22:49:43.785346   36778 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 22:49:43.810025   36778 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 22:49:43.810266   36778 kubeadm.go:310] [mark-control-plane] Marking the node ha-920193 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 22:49:43.821256   36778 kubeadm.go:310] [bootstrap-token] Using token: 72yxn0.qrsfcagkngfj4gxi
	I1209 22:49:43.822572   36778 out.go:235]   - Configuring RBAC rules ...
	I1209 22:49:43.822691   36778 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 22:49:43.832707   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 22:49:43.844059   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 22:49:43.846995   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 22:49:43.849841   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 22:49:43.856257   36778 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 22:49:44.160151   36778 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 22:49:44.591740   36778 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 22:49:45.161509   36778 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 22:49:45.162464   36778 kubeadm.go:310] 
	I1209 22:49:45.162543   36778 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 22:49:45.162552   36778 kubeadm.go:310] 
	I1209 22:49:45.162641   36778 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 22:49:45.162653   36778 kubeadm.go:310] 
	I1209 22:49:45.162689   36778 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 22:49:45.162763   36778 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 22:49:45.162845   36778 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 22:49:45.162856   36778 kubeadm.go:310] 
	I1209 22:49:45.162934   36778 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 22:49:45.162944   36778 kubeadm.go:310] 
	I1209 22:49:45.163005   36778 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 22:49:45.163016   36778 kubeadm.go:310] 
	I1209 22:49:45.163084   36778 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 22:49:45.163184   36778 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 22:49:45.163290   36778 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 22:49:45.163301   36778 kubeadm.go:310] 
	I1209 22:49:45.163412   36778 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 22:49:45.163482   36778 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 22:49:45.163488   36778 kubeadm.go:310] 
	I1209 22:49:45.163586   36778 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 72yxn0.qrsfcagkngfj4gxi \
	I1209 22:49:45.163727   36778 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1209 22:49:45.163762   36778 kubeadm.go:310] 	--control-plane 
	I1209 22:49:45.163771   36778 kubeadm.go:310] 
	I1209 22:49:45.163891   36778 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 22:49:45.163902   36778 kubeadm.go:310] 
	I1209 22:49:45.164042   36778 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 72yxn0.qrsfcagkngfj4gxi \
	I1209 22:49:45.164198   36778 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1209 22:49:45.164453   36778 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
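Editor's note: the --discovery-token-ca-cert-hash in the join commands above is not arbitrary; it is the SHA-256 of the cluster CA certificate's Subject Public Key Info, which joining nodes use to pin the CA they discover over the insecure bootstrap channel. Recomputing it from a CA certificate is a few lines of Go (the path follows the certificatesDir /var/lib/minikube/certs shown earlier in this log):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // certificatesDir from the kubeadm config above
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }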
	I1209 22:49:45.164487   36778 cni.go:84] Creating CNI manager for ""
	I1209 22:49:45.164497   36778 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 22:49:45.166869   36778 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1209 22:49:45.168578   36778 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 22:49:45.173867   36778 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1209 22:49:45.173890   36778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 22:49:45.193577   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 22:49:45.540330   36778 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 22:49:45.540400   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:45.540429   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-920193 minikube.k8s.io/updated_at=2024_12_09T22_49_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=ha-920193 minikube.k8s.io/primary=true
	I1209 22:49:45.563713   36778 ops.go:34] apiserver oom_adj: -16
	I1209 22:49:45.755027   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:46.255384   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:46.755819   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:47.255436   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:47.755914   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:48.255404   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:48.755938   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:49.255745   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:49.346913   36778 kubeadm.go:1113] duration metric: took 3.806571287s to wait for elevateKubeSystemPrivileges
	I1209 22:49:49.346942   36778 kubeadm.go:394] duration metric: took 15.578011127s to StartCluster
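Editor's note: the burst of `kubectl get sa default` calls above is a readiness gate; minikube keeps polling until the default ServiceAccount exists, i.e. until the service-account controller has caught up after the cluster-admin binding was created. The same wait expressed with client-go, as a hedged sketch (the kubeconfig path is the in-VM path from the log, so adjust it for wherever you run this; the timeout is arbitrary):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // in-VM path from the log
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	for {
    		_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    		if err == nil {
    			fmt.Println("default ServiceAccount exists")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			panic("timed out waiting for default ServiceAccount")
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }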
	I1209 22:49:49.346958   36778 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:49.347032   36778 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:49:49.347686   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:49.347889   36778 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:49:49.347901   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 22:49:49.347912   36778 start.go:241] waiting for startup goroutines ...
	I1209 22:49:49.347916   36778 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 22:49:49.347997   36778 addons.go:69] Setting storage-provisioner=true in profile "ha-920193"
	I1209 22:49:49.348008   36778 addons.go:69] Setting default-storageclass=true in profile "ha-920193"
	I1209 22:49:49.348018   36778 addons.go:234] Setting addon storage-provisioner=true in "ha-920193"
	I1209 22:49:49.348025   36778 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-920193"
	I1209 22:49:49.348059   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:49:49.348092   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:49:49.348366   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.348401   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.348486   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.348504   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.364294   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41179
	I1209 22:49:49.364762   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I1209 22:49:49.364808   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.365192   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.365331   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.365359   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.365654   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.365671   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.365700   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.365855   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:49.366017   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.366436   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.366477   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.367841   36778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:49:49.368072   36778 kapi.go:59] client config for ha-920193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key", CAFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c9a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 22:49:49.368506   36778 cert_rotation.go:140] Starting client certificate rotation controller
	I1209 22:49:49.368728   36778 addons.go:234] Setting addon default-storageclass=true in "ha-920193"
	I1209 22:49:49.368759   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:49:49.368995   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.369045   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.381548   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44341
	I1209 22:49:49.382048   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.382623   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.382650   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.382946   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.383123   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:49.384085   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35753
	I1209 22:49:49.384563   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.385002   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:49.385074   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.385099   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.385406   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.385869   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.385898   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.387093   36778 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 22:49:49.388363   36778 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 22:49:49.388378   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 22:49:49.388396   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:49.391382   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.391959   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:49.391988   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.392168   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:49.392369   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:49.392529   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:49.392718   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:49.402583   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I1209 22:49:49.403101   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.403703   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.403733   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.404140   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.404327   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:49.406048   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:49.406246   36778 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 22:49:49.406264   36778 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 22:49:49.406283   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:49.409015   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.409417   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:49.409445   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.409566   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:49.409736   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:49.409906   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:49.410051   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:49.469421   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 22:49:49.523797   36778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 22:49:49.572493   36778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 22:49:49.935058   36778 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1209 22:49:50.246776   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.246808   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.246866   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.246889   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.247109   36778 main.go:141] libmachine: (ha-920193) DBG | Closing plugin on server side
	I1209 22:49:50.247126   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247142   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247149   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247150   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247161   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.247168   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.247161   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.247214   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.247452   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247465   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247474   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247491   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247524   36778 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 22:49:50.247539   36778 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 22:49:50.247452   36778 main.go:141] libmachine: (ha-920193) DBG | Closing plugin on server side
	I1209 22:49:50.247679   36778 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1209 22:49:50.247688   36778 round_trippers.go:469] Request Headers:
	I1209 22:49:50.247699   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:49:50.247705   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:49:50.258818   36778 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1209 22:49:50.259388   36778 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1209 22:49:50.259405   36778 round_trippers.go:469] Request Headers:
	I1209 22:49:50.259415   36778 round_trippers.go:473]     Content-Type: application/json
	I1209 22:49:50.259421   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:49:50.259427   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:49:50.263578   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
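Editor's note: the GET/PUT pair above is the default-storageclass addon marking the "standard" StorageClass as the default (via its storageclass.kubernetes.io/is-default-class annotation), talking to the API server through the VIP. Reading that annotation back with client-go looks roughly like this (sketch only; the kubeconfig path is a placeholder):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	sc, err := cs.StorageV1().StorageClasses().Get(context.Background(), "standard", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(sc.Name, "default =", sc.Annotations["storageclass.kubernetes.io/is-default-class"])
    }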
	I1209 22:49:50.263947   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.263973   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.264222   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.264298   36778 main.go:141] libmachine: (ha-920193) DBG | Closing plugin on server side
	I1209 22:49:50.264309   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.266759   36778 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1209 22:49:50.268058   36778 addons.go:510] duration metric: took 920.142906ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 22:49:50.268097   36778 start.go:246] waiting for cluster config update ...
	I1209 22:49:50.268112   36778 start.go:255] writing updated cluster config ...
	I1209 22:49:50.269702   36778 out.go:201] 
	I1209 22:49:50.271046   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:49:50.271126   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:50.272711   36778 out.go:177] * Starting "ha-920193-m02" control-plane node in "ha-920193" cluster
	I1209 22:49:50.273838   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:49:50.273861   36778 cache.go:56] Caching tarball of preloaded images
	I1209 22:49:50.273946   36778 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:49:50.273960   36778 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:49:50.274036   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:50.274220   36778 start.go:360] acquireMachinesLock for ha-920193-m02: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:49:50.274272   36778 start.go:364] duration metric: took 30.506µs to acquireMachinesLock for "ha-920193-m02"
	I1209 22:49:50.274296   36778 start.go:93] Provisioning new machine with config: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
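The config dump above is persisted as JSON at .minikube/profiles/ha-920193/config.json (the "Saving config to ..." lines). As a minimal sketch only — it makes no assumptions about minikube's actual schema beyond the path shown in the log — the profile file can be inspected like this:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Path copied from the log above; adjust for your own environment.
	path := "/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read config:", err)
		os.Exit(1)
	}

	// Decode into a generic map so the sketch makes no claims about the exact schema.
	var cfg map[string]any
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, "parse config:", err)
		os.Exit(1)
	}

	for key := range cfg {
		fmt.Println("top-level key:", key)
	}
}
```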
	I1209 22:49:50.274418   36778 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1209 22:49:50.275986   36778 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 22:49:50.276071   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:50.276101   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:50.290689   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42005
	I1209 22:49:50.291090   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:50.291624   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:50.291657   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:50.291974   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:50.292165   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:49:50.292290   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:49:50.292460   36778 start.go:159] libmachine.API.Create for "ha-920193" (driver="kvm2")
	I1209 22:49:50.292488   36778 client.go:168] LocalClient.Create starting
	I1209 22:49:50.292523   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:49:50.292562   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:50.292580   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:50.292650   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:49:50.292677   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:50.292694   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:50.292719   36778 main.go:141] libmachine: Running pre-create checks...
	I1209 22:49:50.292730   36778 main.go:141] libmachine: (ha-920193-m02) Calling .PreCreateCheck
	I1209 22:49:50.292863   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetConfigRaw
	I1209 22:49:50.293207   36778 main.go:141] libmachine: Creating machine...
	I1209 22:49:50.293220   36778 main.go:141] libmachine: (ha-920193-m02) Calling .Create
	I1209 22:49:50.293319   36778 main.go:141] libmachine: (ha-920193-m02) Creating KVM machine...
	I1209 22:49:50.294569   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found existing default KVM network
	I1209 22:49:50.294708   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found existing private KVM network mk-ha-920193
	I1209 22:49:50.294863   36778 main.go:141] libmachine: (ha-920193-m02) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02 ...
	I1209 22:49:50.294888   36778 main.go:141] libmachine: (ha-920193-m02) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:49:50.294937   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.294840   37166 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:50.295026   36778 main.go:141] libmachine: (ha-920193-m02) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:49:50.540657   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.540505   37166 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa...
	I1209 22:49:50.636978   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.636881   37166 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/ha-920193-m02.rawdisk...
	I1209 22:49:50.637002   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Writing magic tar header
	I1209 22:49:50.637012   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Writing SSH key tar header
	I1209 22:49:50.637092   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.637012   37166 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02 ...
	I1209 22:49:50.637134   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02
	I1209 22:49:50.637167   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02 (perms=drwx------)
	I1209 22:49:50.637189   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:49:50.637210   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:49:50.637225   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:49:50.637240   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:49:50.637251   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:49:50.637263   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:49:50.637274   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:50.637286   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:49:50.637298   36778 main.go:141] libmachine: (ha-920193-m02) Creating domain...
	I1209 22:49:50.637312   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:49:50.637321   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:49:50.637330   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home
	I1209 22:49:50.637341   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Skipping /home - not owner
	I1209 22:49:50.638225   36778 main.go:141] libmachine: (ha-920193-m02) define libvirt domain using xml: 
	I1209 22:49:50.638247   36778 main.go:141] libmachine: (ha-920193-m02) <domain type='kvm'>
	I1209 22:49:50.638255   36778 main.go:141] libmachine: (ha-920193-m02)   <name>ha-920193-m02</name>
	I1209 22:49:50.638263   36778 main.go:141] libmachine: (ha-920193-m02)   <memory unit='MiB'>2200</memory>
	I1209 22:49:50.638271   36778 main.go:141] libmachine: (ha-920193-m02)   <vcpu>2</vcpu>
	I1209 22:49:50.638284   36778 main.go:141] libmachine: (ha-920193-m02)   <features>
	I1209 22:49:50.638291   36778 main.go:141] libmachine: (ha-920193-m02)     <acpi/>
	I1209 22:49:50.638306   36778 main.go:141] libmachine: (ha-920193-m02)     <apic/>
	I1209 22:49:50.638319   36778 main.go:141] libmachine: (ha-920193-m02)     <pae/>
	I1209 22:49:50.638328   36778 main.go:141] libmachine: (ha-920193-m02)     
	I1209 22:49:50.638333   36778 main.go:141] libmachine: (ha-920193-m02)   </features>
	I1209 22:49:50.638340   36778 main.go:141] libmachine: (ha-920193-m02)   <cpu mode='host-passthrough'>
	I1209 22:49:50.638346   36778 main.go:141] libmachine: (ha-920193-m02)   
	I1209 22:49:50.638356   36778 main.go:141] libmachine: (ha-920193-m02)   </cpu>
	I1209 22:49:50.638364   36778 main.go:141] libmachine: (ha-920193-m02)   <os>
	I1209 22:49:50.638380   36778 main.go:141] libmachine: (ha-920193-m02)     <type>hvm</type>
	I1209 22:49:50.638393   36778 main.go:141] libmachine: (ha-920193-m02)     <boot dev='cdrom'/>
	I1209 22:49:50.638403   36778 main.go:141] libmachine: (ha-920193-m02)     <boot dev='hd'/>
	I1209 22:49:50.638426   36778 main.go:141] libmachine: (ha-920193-m02)     <bootmenu enable='no'/>
	I1209 22:49:50.638448   36778 main.go:141] libmachine: (ha-920193-m02)   </os>
	I1209 22:49:50.638464   36778 main.go:141] libmachine: (ha-920193-m02)   <devices>
	I1209 22:49:50.638475   36778 main.go:141] libmachine: (ha-920193-m02)     <disk type='file' device='cdrom'>
	I1209 22:49:50.638507   36778 main.go:141] libmachine: (ha-920193-m02)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/boot2docker.iso'/>
	I1209 22:49:50.638533   36778 main.go:141] libmachine: (ha-920193-m02)       <target dev='hdc' bus='scsi'/>
	I1209 22:49:50.638547   36778 main.go:141] libmachine: (ha-920193-m02)       <readonly/>
	I1209 22:49:50.638559   36778 main.go:141] libmachine: (ha-920193-m02)     </disk>
	I1209 22:49:50.638570   36778 main.go:141] libmachine: (ha-920193-m02)     <disk type='file' device='disk'>
	I1209 22:49:50.638583   36778 main.go:141] libmachine: (ha-920193-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:49:50.638601   36778 main.go:141] libmachine: (ha-920193-m02)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/ha-920193-m02.rawdisk'/>
	I1209 22:49:50.638612   36778 main.go:141] libmachine: (ha-920193-m02)       <target dev='hda' bus='virtio'/>
	I1209 22:49:50.638623   36778 main.go:141] libmachine: (ha-920193-m02)     </disk>
	I1209 22:49:50.638632   36778 main.go:141] libmachine: (ha-920193-m02)     <interface type='network'>
	I1209 22:49:50.638641   36778 main.go:141] libmachine: (ha-920193-m02)       <source network='mk-ha-920193'/>
	I1209 22:49:50.638652   36778 main.go:141] libmachine: (ha-920193-m02)       <model type='virtio'/>
	I1209 22:49:50.638661   36778 main.go:141] libmachine: (ha-920193-m02)     </interface>
	I1209 22:49:50.638672   36778 main.go:141] libmachine: (ha-920193-m02)     <interface type='network'>
	I1209 22:49:50.638680   36778 main.go:141] libmachine: (ha-920193-m02)       <source network='default'/>
	I1209 22:49:50.638690   36778 main.go:141] libmachine: (ha-920193-m02)       <model type='virtio'/>
	I1209 22:49:50.638708   36778 main.go:141] libmachine: (ha-920193-m02)     </interface>
	I1209 22:49:50.638726   36778 main.go:141] libmachine: (ha-920193-m02)     <serial type='pty'>
	I1209 22:49:50.638741   36778 main.go:141] libmachine: (ha-920193-m02)       <target port='0'/>
	I1209 22:49:50.638748   36778 main.go:141] libmachine: (ha-920193-m02)     </serial>
	I1209 22:49:50.638756   36778 main.go:141] libmachine: (ha-920193-m02)     <console type='pty'>
	I1209 22:49:50.638764   36778 main.go:141] libmachine: (ha-920193-m02)       <target type='serial' port='0'/>
	I1209 22:49:50.638775   36778 main.go:141] libmachine: (ha-920193-m02)     </console>
	I1209 22:49:50.638784   36778 main.go:141] libmachine: (ha-920193-m02)     <rng model='virtio'>
	I1209 22:49:50.638793   36778 main.go:141] libmachine: (ha-920193-m02)       <backend model='random'>/dev/random</backend>
	I1209 22:49:50.638807   36778 main.go:141] libmachine: (ha-920193-m02)     </rng>
	I1209 22:49:50.638821   36778 main.go:141] libmachine: (ha-920193-m02)     
	I1209 22:49:50.638836   36778 main.go:141] libmachine: (ha-920193-m02)     
	I1209 22:49:50.638854   36778 main.go:141] libmachine: (ha-920193-m02)   </devices>
	I1209 22:49:50.638870   36778 main.go:141] libmachine: (ha-920193-m02) </domain>
	I1209 22:49:50.638881   36778 main.go:141] libmachine: (ha-920193-m02) 
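The block above is the generated libvirt domain XML for the new node. The kvm2 driver defines and starts the domain through the libvirt API; as a rough, hypothetical equivalent using the virsh CLI (assuming the XML has been saved to a file — the path below is invented for illustration):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and streams its output, returning any error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Hypothetical file holding the <domain> XML printed in the log above.
	xmlPath := "/tmp/ha-920193-m02.xml"

	// Register the domain definition with libvirt, then boot it.
	if err := run("virsh", "--connect", "qemu:///system", "define", xmlPath); err != nil {
		fmt.Fprintln(os.Stderr, "define failed:", err)
		os.Exit(1)
	}
	if err := run("virsh", "--connect", "qemu:///system", "start", "ha-920193-m02"); err != nil {
		fmt.Fprintln(os.Stderr, "start failed:", err)
		os.Exit(1)
	}
}
```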
	I1209 22:49:50.645452   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:4e:0e:44 in network default
	I1209 22:49:50.646094   36778 main.go:141] libmachine: (ha-920193-m02) Ensuring networks are active...
	I1209 22:49:50.646118   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:50.646792   36778 main.go:141] libmachine: (ha-920193-m02) Ensuring network default is active
	I1209 22:49:50.647136   36778 main.go:141] libmachine: (ha-920193-m02) Ensuring network mk-ha-920193 is active
	I1209 22:49:50.647479   36778 main.go:141] libmachine: (ha-920193-m02) Getting domain xml...
	I1209 22:49:50.648166   36778 main.go:141] libmachine: (ha-920193-m02) Creating domain...
	I1209 22:49:51.846569   36778 main.go:141] libmachine: (ha-920193-m02) Waiting to get IP...
	I1209 22:49:51.847529   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:51.847984   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:51.848045   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:51.847987   37166 retry.go:31] will retry after 223.150886ms: waiting for machine to come up
	I1209 22:49:52.072674   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:52.073143   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:52.073214   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:52.073119   37166 retry.go:31] will retry after 342.157886ms: waiting for machine to come up
	I1209 22:49:52.416515   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:52.416911   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:52.416933   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:52.416873   37166 retry.go:31] will retry after 319.618715ms: waiting for machine to come up
	I1209 22:49:52.738511   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:52.739067   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:52.739096   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:52.739025   37166 retry.go:31] will retry after 426.813714ms: waiting for machine to come up
	I1209 22:49:53.167672   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:53.168111   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:53.168139   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:53.168063   37166 retry.go:31] will retry after 465.129361ms: waiting for machine to come up
	I1209 22:49:53.634495   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:53.635006   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:53.635033   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:53.634965   37166 retry.go:31] will retry after 688.228763ms: waiting for machine to come up
	I1209 22:49:54.324368   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:54.324751   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:54.324780   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:54.324706   37166 retry.go:31] will retry after 952.948713ms: waiting for machine to come up
	I1209 22:49:55.278732   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:55.279052   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:55.279084   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:55.279025   37166 retry.go:31] will retry after 1.032940312s: waiting for machine to come up
	I1209 22:49:56.313177   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:56.313589   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:56.313613   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:56.313562   37166 retry.go:31] will retry after 1.349167493s: waiting for machine to come up
	I1209 22:49:57.664618   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:57.665031   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:57.665060   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:57.664986   37166 retry.go:31] will retry after 1.512445541s: waiting for machine to come up
	I1209 22:49:59.179536   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:59.179914   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:59.179939   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:59.179864   37166 retry.go:31] will retry after 2.399970974s: waiting for machine to come up
	I1209 22:50:01.582227   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:01.582662   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:50:01.582690   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:50:01.582599   37166 retry.go:31] will retry after 2.728474301s: waiting for machine to come up
	I1209 22:50:04.312490   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:04.312880   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:50:04.312905   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:50:04.312847   37166 retry.go:31] will retry after 4.276505546s: waiting for machine to come up
	I1209 22:50:08.590485   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:08.590927   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:50:08.590949   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:50:08.590889   37166 retry.go:31] will retry after 4.29966265s: waiting for machine to come up
	I1209 22:50:12.892743   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.893228   36778 main.go:141] libmachine: (ha-920193-m02) Found IP for machine: 192.168.39.43
	I1209 22:50:12.893253   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has current primary IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.893261   36778 main.go:141] libmachine: (ha-920193-m02) Reserving static IP address...
	I1209 22:50:12.893598   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find host DHCP lease matching {name: "ha-920193-m02", mac: "52:54:00:e3:b9:61", ip: "192.168.39.43"} in network mk-ha-920193
	I1209 22:50:12.967208   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Getting to WaitForSSH function...
	I1209 22:50:12.967241   36778 main.go:141] libmachine: (ha-920193-m02) Reserved static IP address: 192.168.39.43
	I1209 22:50:12.967255   36778 main.go:141] libmachine: (ha-920193-m02) Waiting for SSH to be available...
	I1209 22:50:12.969615   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.969971   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:12.969998   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.970158   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Using SSH client type: external
	I1209 22:50:12.970180   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa (-rw-------)
	I1209 22:50:12.970211   36778 main.go:141] libmachine: (ha-920193-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:50:12.970224   36778 main.go:141] libmachine: (ha-920193-m02) DBG | About to run SSH command:
	I1209 22:50:12.970270   36778 main.go:141] libmachine: (ha-920193-m02) DBG | exit 0
	I1209 22:50:13.099696   36778 main.go:141] libmachine: (ha-920193-m02) DBG | SSH cmd err, output: <nil>: 
	I1209 22:50:13.100005   36778 main.go:141] libmachine: (ha-920193-m02) KVM machine creation complete!
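The repeated "will retry after ...: waiting for machine to come up" lines above are a poll-with-growing-backoff loop that waits for the VM's DHCP lease to appear. A generic sketch of that pattern (lookupIP is a stand-in check, not minikube's retry helper):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for whatever check is being retried
// (in the log above: asking libvirt for the domain's DHCP lease).
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	backoff := 200 * time.Millisecond
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the wait and add jitter, roughly matching the
		// 223ms, 342ms, ... 4.3s progression seen in the log.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		backoff = backoff * 3 / 2
	}
	fmt.Println("timed out waiting for machine to come up")
}
```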
	I1209 22:50:13.100244   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetConfigRaw
	I1209 22:50:13.100810   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:13.100988   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:13.101128   36778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:50:13.101154   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetState
	I1209 22:50:13.102588   36778 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:50:13.102600   36778 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:50:13.102605   36778 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:50:13.102611   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.105041   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.105398   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.105421   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.105634   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.105791   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.105931   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.106034   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.106172   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.106381   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.106392   36778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:50:13.214686   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
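WaitForSSH treats a clean `exit 0` over SSH as "machine reachable". A sketch of the same probe using the system ssh client, reusing the key path, user@IP, and options shown in the external-client log line above:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Key path and target copied from the log above.
	key := "/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa"
	target := "docker@192.168.39.43"

	// Same options the external SSH client is shown using in the log.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		target,
		"exit 0",
	}

	cmd := exec.Command("ssh", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "SSH not ready yet:", err)
		os.Exit(1)
	}
	fmt.Println("SSH is available")
}
```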
	I1209 22:50:13.214707   36778 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:50:13.214714   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.217518   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.217915   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.217939   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.218093   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.218249   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.218422   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.218594   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.218762   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.218925   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.218936   36778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:50:13.328344   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:50:13.328426   36778 main.go:141] libmachine: found compatible host: buildroot
	I1209 22:50:13.328436   36778 main.go:141] libmachine: Provisioning with buildroot...
	I1209 22:50:13.328445   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:50:13.328699   36778 buildroot.go:166] provisioning hostname "ha-920193-m02"
	I1209 22:50:13.328724   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:50:13.328916   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.331720   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.332124   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.332160   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.332317   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.332518   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.332696   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.332887   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.333073   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.333230   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.333241   36778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193-m02 && echo "ha-920193-m02" | sudo tee /etc/hostname
	I1209 22:50:13.453959   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193-m02
	
	I1209 22:50:13.453993   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.457007   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.457414   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.457445   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.457635   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.457816   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.457961   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.458096   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.458282   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.458465   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.458486   36778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:50:13.575704   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:50:13.575734   36778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:50:13.575756   36778 buildroot.go:174] setting up certificates
	I1209 22:50:13.575768   36778 provision.go:84] configureAuth start
	I1209 22:50:13.575777   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:50:13.576037   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:13.578662   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.579132   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.579159   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.579337   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.581290   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.581592   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.581613   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.581740   36778 provision.go:143] copyHostCerts
	I1209 22:50:13.581770   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:50:13.581820   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:50:13.581832   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:50:13.581924   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:50:13.582006   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:50:13.582026   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:50:13.582033   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:50:13.582058   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:50:13.582102   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:50:13.582122   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:50:13.582131   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:50:13.582166   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:50:13.582231   36778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193-m02 san=[127.0.0.1 192.168.39.43 ha-920193-m02 localhost minikube]
	I1209 22:50:13.756786   36778 provision.go:177] copyRemoteCerts
	I1209 22:50:13.756844   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:50:13.756875   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.759281   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.759620   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.759646   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.759868   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.760043   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.760166   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.760302   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:13.842746   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:50:13.842829   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:50:13.868488   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:50:13.868558   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 22:50:13.894237   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:50:13.894300   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 22:50:13.919207   36778 provision.go:87] duration metric: took 343.427038ms to configureAuth
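configureAuth above generates a per-node server certificate whose SANs cover the node's IP and hostnames (san=[127.0.0.1 192.168.39.43 ha-920193-m02 localhost minikube]). A compact, standard-library-only sketch of issuing a certificate with those SANs; it self-signs for brevity, whereas the real flow signs against ca.pem/ca-key.pem:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a fresh key pair for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// SAN values taken from the provision log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-920193-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		DNSNames:     []string{"ha-920193-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.43")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}

	// Self-signed for brevity; the real provisioner signs with the cluster CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
```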
	I1209 22:50:13.919237   36778 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:50:13.919436   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:50:13.919529   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.922321   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.922667   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.922689   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.922943   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.923101   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.923227   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.923381   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.923527   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.923766   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.923783   36778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:50:14.145275   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:50:14.145304   36778 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:50:14.145313   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetURL
	I1209 22:50:14.146583   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Using libvirt version 6000000
	I1209 22:50:14.148809   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.149140   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.149168   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.149302   36778 main.go:141] libmachine: Docker is up and running!
	I1209 22:50:14.149316   36778 main.go:141] libmachine: Reticulating splines...
	I1209 22:50:14.149322   36778 client.go:171] duration metric: took 23.856827848s to LocalClient.Create
	I1209 22:50:14.149351   36778 start.go:167] duration metric: took 23.856891761s to libmachine.API.Create "ha-920193"
	I1209 22:50:14.149370   36778 start.go:293] postStartSetup for "ha-920193-m02" (driver="kvm2")
	I1209 22:50:14.149387   36778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:50:14.149412   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.149683   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:50:14.149706   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:14.152301   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.152593   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.152623   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.152758   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.152950   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.153102   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.153238   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:14.237586   36778 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:50:14.241320   36778 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:50:14.241353   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:50:14.241430   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:50:14.241512   36778 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:50:14.241522   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:50:14.241599   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:50:14.250940   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:50:14.273559   36778 start.go:296] duration metric: took 124.171367ms for postStartSetup
	I1209 22:50:14.273622   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetConfigRaw
	I1209 22:50:14.274207   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:14.276819   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.277127   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.277156   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.277340   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:50:14.277540   36778 start.go:128] duration metric: took 24.003111268s to createHost
	I1209 22:50:14.277563   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:14.279937   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.280232   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.280257   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.280382   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.280557   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.280726   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.280910   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.281099   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:14.281291   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:14.281304   36778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:50:14.388152   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733784614.364424625
	
	I1209 22:50:14.388174   36778 fix.go:216] guest clock: 1733784614.364424625
	I1209 22:50:14.388181   36778 fix.go:229] Guest: 2024-12-09 22:50:14.364424625 +0000 UTC Remote: 2024-12-09 22:50:14.27755238 +0000 UTC m=+71.170238927 (delta=86.872245ms)
	I1209 22:50:14.388195   36778 fix.go:200] guest clock delta is within tolerance: 86.872245ms
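The clock check runs `date +%s.%N` in the guest and compares it with the host clock; the ~87ms delta above is within tolerance. A small sketch of parsing that output and computing the delta (sample value copied from the log):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, copied from the log above.
	guestOut := "1733784614.364424625"

	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		panic(err)
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(sec, nsec)

	// In the real check the "host" side is the time recorded around the SSH call.
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (compare against the allowed tolerance)\n", delta)
}
```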
	I1209 22:50:14.388200   36778 start.go:83] releasing machines lock for "ha-920193-m02", held for 24.113917393s
	I1209 22:50:14.388222   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.388471   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:14.391084   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.391432   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.391458   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.393935   36778 out.go:177] * Found network options:
	I1209 22:50:14.395356   36778 out.go:177]   - NO_PROXY=192.168.39.102
	W1209 22:50:14.396713   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:50:14.396769   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.397373   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.397558   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.397653   36778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:50:14.397697   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	W1209 22:50:14.397767   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:50:14.397855   36778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:50:14.397879   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:14.400330   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.400563   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.400725   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.400755   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.400909   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.400944   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.400970   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.401106   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.401188   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.401275   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.401373   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.401443   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:14.401504   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.401614   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:14.637188   36778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:50:14.643200   36778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:50:14.643281   36778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:50:14.659398   36778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 22:50:14.659426   36778 start.go:495] detecting cgroup driver to use...
	I1209 22:50:14.659491   36778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:50:14.676247   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:50:14.690114   36778 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:50:14.690183   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:50:14.704181   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:50:14.718407   36778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:50:14.836265   36778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:50:14.977440   36778 docker.go:233] disabling docker service ...
	I1209 22:50:14.977523   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:50:14.992218   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:50:15.006032   36778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:50:15.132938   36778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:50:15.246587   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:50:15.260594   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:50:15.278081   36778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:50:15.278144   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.288215   36778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:50:15.288291   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.298722   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.309333   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.319278   36778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:50:15.329514   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.339686   36778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.356544   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.367167   36778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:50:15.376313   36778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:50:15.376368   36778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:50:15.389607   36778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 22:50:15.399026   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:50:15.510388   36778 ssh_runner.go:195] Run: sudo systemctl restart crio
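
[Editor's note] Every "ssh_runner.go:195] Run: ..." entry above (and below) is a single shell command executed on the new machine over SSH as the docker user, using the per-machine id_rsa shown earlier. As a rough illustration only (not minikube's actual ssh_runner), here is a minimal Go sketch of that run-one-command-over-SSH pattern with golang.org/x/crypto/ssh; the address, user, key path and command are placeholders taken from the log:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runRemote opens one SSH session and runs a single shell command,
    // returning combined stdout/stderr, roughly mirroring the
    // "ssh_runner.go:195] Run: ..." entries in the log.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	// Placeholder values modelled on the log: node IP and the profile's id_rsa.
    	out, err := runRemote("192.168.39.43:22", "docker",
    		"/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa",
    		"sudo systemctl is-active --quiet service crio; echo $?")
    	fmt.Println(out, err)
    }
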
	I1209 22:50:15.594142   36778 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:50:15.594209   36778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:50:15.598620   36778 start.go:563] Will wait 60s for crictl version
	I1209 22:50:15.598673   36778 ssh_runner.go:195] Run: which crictl
	I1209 22:50:15.602047   36778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:50:15.640250   36778 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:50:15.640331   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:50:15.667027   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:50:15.696782   36778 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:50:15.698100   36778 out.go:177]   - env NO_PROXY=192.168.39.102
	I1209 22:50:15.699295   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:15.701971   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:15.702367   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:15.702391   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:15.702593   36778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:50:15.706559   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:50:15.719413   36778 mustload.go:65] Loading cluster: ha-920193
	I1209 22:50:15.719679   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:50:15.720045   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:50:15.720080   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:50:15.735359   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I1209 22:50:15.735806   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:50:15.736258   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:50:15.736277   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:50:15.736597   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:50:15.736809   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:50:15.738383   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:50:15.738784   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:50:15.738819   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:50:15.754087   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I1209 22:50:15.754545   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:50:15.755016   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:50:15.755039   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:50:15.755363   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:50:15.755658   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:50:15.755811   36778 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.43
	I1209 22:50:15.755825   36778 certs.go:194] generating shared ca certs ...
	I1209 22:50:15.755842   36778 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:50:15.756003   36778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:50:15.756062   36778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:50:15.756077   36778 certs.go:256] generating profile certs ...
	I1209 22:50:15.756191   36778 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:50:15.756224   36778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a
	I1209 22:50:15.756244   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.43 192.168.39.254]
	I1209 22:50:15.922567   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a ...
	I1209 22:50:15.922607   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a: {Name:mkdd9b3ceabde3bba17fb86e02452182c7c5df88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:50:15.922833   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a ...
	I1209 22:50:15.922852   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a: {Name:mkf2dc6e973669b6272c7472a098255f36b1b21a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:50:15.922964   36778 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:50:15.923108   36778 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
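
[Editor's note] The apiserver serving certificate generated above is signed by the profile's minikubeCA and carries IP SANs for the service VIP (10.96.0.1), loopback, both control-plane node IPs (192.168.39.102, 192.168.39.43) and the kube-vip address 192.168.39.254, so clients can reach either control plane through the HA VIP. A self-contained, hypothetical Go sketch of issuing such a certificate with crypto/x509 follows (a throwaway CA stands in for minikubeCA; file names are illustrative, this is not minikube's certs code):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for the profile's minikubeCA (illustrative only).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		BasicConstraintsValid: true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving cert for the apiserver with the IP SANs seen in the log:
    	// service IP, loopback, both control-plane node IPs and the kube-vip VIP.
    	sans := []net.IP{
    		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		net.ParseIP("192.168.39.102"), net.ParseIP("192.168.39.43"), net.ParseIP("192.168.39.254"),
    	}
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		IPAddresses:  sans,
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	_ = os.WriteFile("apiserver.crt",
    		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
    	_ = os.WriteFile("apiserver.key",
    		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
    }
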
	I1209 22:50:15.923250   36778 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:50:15.923268   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:50:15.923283   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:50:15.923300   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:50:15.923315   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:50:15.923331   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:50:15.923346   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:50:15.923361   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:50:15.923376   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:50:15.923447   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:50:15.923481   36778 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:50:15.923492   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:50:15.923526   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:50:15.923552   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:50:15.923617   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:50:15.923669   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:50:15.923701   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:50:15.923718   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:15.923736   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:50:15.923774   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:50:15.926684   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:15.927100   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:50:15.927132   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:15.927316   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:50:15.927520   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:50:15.927686   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:50:15.927817   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:50:15.995984   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 22:50:16.000689   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 22:50:16.010769   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 22:50:16.015461   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 22:50:16.025382   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 22:50:16.029170   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 22:50:16.038869   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 22:50:16.042928   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 22:50:16.052680   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 22:50:16.056624   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 22:50:16.067154   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 22:50:16.071136   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1209 22:50:16.081380   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:50:16.105907   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:50:16.130202   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:50:16.154712   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:50:16.178136   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 22:50:16.201144   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 22:50:16.223968   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:50:16.245967   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:50:16.268545   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:50:16.290945   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:50:16.313125   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:50:16.335026   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 22:50:16.350896   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 22:50:16.366797   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 22:50:16.382304   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 22:50:16.398151   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 22:50:16.413542   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1209 22:50:16.428943   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 22:50:16.443894   36778 ssh_runner.go:195] Run: openssl version
	I1209 22:50:16.449370   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:50:16.460122   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:50:16.464413   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:50:16.464474   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:50:16.470266   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 22:50:16.480854   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:50:16.491307   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:50:16.495420   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:50:16.495468   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:50:16.500658   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:50:16.511025   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:50:16.521204   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:16.525268   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:16.525347   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:16.530531   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
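
[Editor's note] The ls / "openssl x509 -hash" / "ln -fs" sequence above installs each CA into the node's OpenSSL trust store: the certificate is linked under /etc/ssl/certs as <subject-hash>.0 (b5213941.0 for minikubeCA here). A hypothetical Go sketch of that same pattern, shelling out to openssl for the hash (paths are placeholders, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA links certPath into /etc/ssl/certs under its OpenSSL subject hash,
    // which is what the "ln -fs ... /etc/ssl/certs/<hash>.0" steps above do.
    func installCA(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // emulate ln -fs (force overwrite)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
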
	I1209 22:50:16.542187   36778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:50:16.546109   36778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:50:16.546164   36778 kubeadm.go:934] updating node {m02 192.168.39.43 8443 v1.31.2 crio true true} ...
	I1209 22:50:16.546250   36778 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 22:50:16.546279   36778 kube-vip.go:115] generating kube-vip config ...
	I1209 22:50:16.546321   36778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:50:16.565259   36778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:50:16.565317   36778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
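
[Editor's note] The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so kubelet runs kube-vip as a static pod on each control plane; leader election on the plndr-cp-lock lease decides which node currently answers for the VIP 192.168.39.254, and lb_enable/lb_port spread API traffic across the control planes on 8443. As a hypothetical sketch of how such a manifest can be produced from a Go text/template (the template below is heavily trimmed and is not minikube's own kube-vip template):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Trimmed stand-in for the full static-pod manifest shown in the log.
    const kubeVipTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{ .Image }}
        args: ["manager"]
        env:
        - name: address
          value: {{ .VIP }}
        - name: port
          value: "{{ .Port }}"
      hostNetwork: true
    `

    type vipConfig struct {
    	Image string
    	VIP   string
    	Port  int
    }

    func main() {
    	tmpl := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
    	// Values taken from the log above; in practice they come from the cluster config.
    	cfg := vipConfig{Image: "ghcr.io/kube-vip/kube-vip:v0.8.7", VIP: "192.168.39.254", Port: 8443}
    	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
    		panic(err)
    	}
    }
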
	I1209 22:50:16.565368   36778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:50:16.576227   36778 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 22:50:16.576286   36778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 22:50:16.587283   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 22:50:16.587313   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:50:16.587347   36778 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1209 22:50:16.587371   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:50:16.587429   36778 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1209 22:50:16.591406   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 22:50:16.591443   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 22:50:17.403840   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:50:17.403917   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:50:17.408515   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 22:50:17.408550   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 22:50:17.508668   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:50:17.539619   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:50:17.539709   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:50:17.547698   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 22:50:17.547746   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
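
[Editor's note] kubectl, kubeadm and kubelet are downloaded from dl.k8s.io with a companion .sha256 file (the checksum= part of the URLs above) and only then copied into /var/lib/minikube/binaries/v1.31.2 on the node. A minimal, hypothetical Go sketch of that download-and-verify step (error handling and caching simplified; not minikube's download package):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetchChecked downloads url to dest and verifies it against the hex SHA-256
    // published at url+".sha256", mirroring the checksum= URLs in the log above.
    func fetchChecked(url, dest string) error {
    	sumResp, err := http.Get(url + ".sha256")
    	if err != nil {
    		return err
    	}
    	defer sumResp.Body.Close()
    	sumBytes, err := io.ReadAll(sumResp.Body)
    	if err != nil {
    		return err
    	}
    	fields := strings.Fields(string(sumBytes))
    	if len(fields) == 0 {
    		return fmt.Errorf("empty checksum file for %s", url)
    	}
    	want := fields[0]

    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	out, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != want {
    		return fmt.Errorf("checksum mismatch for %s: got %s want %s", dest, got, want)
    	}
    	return nil
    }

    func main() {
    	err := fetchChecked("https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl", "kubectl")
    	fmt.Println(err)
    }
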
	I1209 22:50:17.976645   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 22:50:17.986050   36778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 22:50:18.001981   36778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:50:18.017737   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 22:50:18.034382   36778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:50:18.038243   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:50:18.051238   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:50:18.168167   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:50:18.185010   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:50:18.185466   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:50:18.185511   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:50:18.200608   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36771
	I1209 22:50:18.201083   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:50:18.201577   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:50:18.201599   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:50:18.201983   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:50:18.202177   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:50:18.202335   36778 start.go:317] joinCluster: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:50:18.202454   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 22:50:18.202478   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:50:18.205838   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:18.206272   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:50:18.206305   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:18.206454   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:50:18.206651   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:50:18.206809   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:50:18.206953   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:50:18.346102   36778 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:50:18.346151   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0e0mum.qzhjvrjwvxlgpdn7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443"
	I1209 22:50:38.220755   36778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0e0mum.qzhjvrjwvxlgpdn7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443": (19.874577958s)
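
[Editor's note] The join command above uses a freshly minted bootstrap token (--ttl=0, so it does not expire) plus --discovery-token-ca-cert-hash, which pins the cluster CA so the joining node cannot be pointed at an impostor API server. As I understand it, that hash is the SHA-256 of the CA certificate's Subject Public Key Info; a hypothetical Go sketch of recomputing it from ca.crt (path is illustrative):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// Path is illustrative; on the node this would be /var/lib/minikube/certs/ca.crt.
    	pemBytes, err := os.ReadFile("ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		log.Fatal("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The discovery hash pins the CA's Subject Public Key Info.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
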
	I1209 22:50:38.220795   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 22:50:38.605694   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-920193-m02 minikube.k8s.io/updated_at=2024_12_09T22_50_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=ha-920193 minikube.k8s.io/primary=false
	I1209 22:50:38.732046   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-920193-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 22:50:38.853470   36778 start.go:319] duration metric: took 20.651129665s to joinCluster
	I1209 22:50:38.853557   36778 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:50:38.853987   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:50:38.855541   36778 out.go:177] * Verifying Kubernetes components...
	I1209 22:50:38.856758   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:50:39.134622   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:50:39.155772   36778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:50:39.156095   36778 kapi.go:59] client config for ha-920193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key", CAFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c9a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 22:50:39.156174   36778 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1209 22:50:39.156458   36778 node_ready.go:35] waiting up to 6m0s for node "ha-920193-m02" to be "Ready" ...
	I1209 22:50:39.156557   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:39.156569   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:39.156580   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:39.156589   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:39.166040   36778 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1209 22:50:39.656808   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:39.656835   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:39.656848   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:39.656853   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:39.660666   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:40.157282   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:40.157306   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:40.157314   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:40.157319   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:40.171594   36778 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1209 22:50:40.656953   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:40.656975   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:40.656984   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:40.656988   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:40.660321   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:41.157246   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:41.157267   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:41.157275   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:41.157278   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:41.160595   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:41.161242   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:41.657713   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:41.657743   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:41.657754   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:41.657760   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:41.661036   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:42.157055   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:42.157081   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:42.157092   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:42.157098   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:42.160406   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:42.657502   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:42.657525   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:42.657535   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:42.657543   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:42.660437   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:43.157580   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:43.157601   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:43.157610   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:43.157614   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:43.159874   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:43.657603   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:43.657624   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:43.657631   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:43.657635   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:43.661418   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:43.662212   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:44.157154   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:44.157180   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:44.157193   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:44.157199   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:44.160641   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:44.657594   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:44.657632   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:44.657639   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:44.657643   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:44.660444   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:45.156643   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:45.156665   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:45.156673   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:45.156678   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:45.159591   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:45.656824   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:45.656848   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:45.656860   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:45.656865   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:45.660567   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:46.157410   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:46.157431   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:46.157440   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:46.157444   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:46.164952   36778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1209 22:50:46.165425   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:46.656667   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:46.656688   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:46.656695   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:46.656701   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:46.660336   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:47.157296   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:47.157321   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:47.157329   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:47.157332   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:47.160332   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:47.657301   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:47.657323   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:47.657331   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:47.657336   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:47.660325   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:48.157563   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:48.157584   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:48.157594   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:48.157608   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:48.160951   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:48.657246   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:48.657273   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:48.657284   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:48.657292   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:48.660393   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:48.661028   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:49.157387   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:49.157407   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:49.157413   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:49.157418   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:49.160553   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:49.656857   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:49.656876   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:49.656884   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:49.656887   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:49.660150   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:50.157105   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:50.157127   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:50.157135   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:50.157138   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:50.160132   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:50.657157   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:50.657175   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:50.657183   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:50.657186   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:50.660060   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:51.156681   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:51.156703   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:51.156710   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:51.156715   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:51.160061   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:51.160485   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:51.656792   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:51.656814   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:51.656822   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:51.656828   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:51.660462   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:52.157422   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:52.157444   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:52.157452   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:52.157456   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:52.160620   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:52.657587   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:52.657612   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:52.657623   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:52.657635   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:52.661805   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:50:53.156794   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:53.156813   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:53.156820   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:53.156824   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:53.159611   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:53.657422   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:53.657443   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:53.657451   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:53.657456   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:53.660973   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:53.661490   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:54.156741   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:54.156775   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:54.156788   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:54.156793   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:54.159842   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:54.657520   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:54.657542   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:54.657551   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:54.657556   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:54.661360   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:55.157356   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:55.157381   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:55.157389   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:55.157398   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:55.160974   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:55.657357   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:55.657380   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:55.657386   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:55.657389   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:55.661109   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:55.661633   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:56.156805   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:56.156829   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:56.156842   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:56.156848   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:56.159652   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:56.657355   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:56.657382   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:56.657391   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:56.657396   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:56.660284   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.156798   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.156817   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.156825   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.156828   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.159439   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.160184   36778 node_ready.go:49] node "ha-920193-m02" has status "Ready":"True"
	I1209 22:50:57.160211   36778 node_ready.go:38] duration metric: took 18.003728094s for node "ha-920193-m02" to be "Ready" ...
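
[Editor's note] The repeated GET /api/v1/nodes/ha-920193-m02 requests above are a ~500ms readiness poll: the Node object is re-read until its Ready condition turns True, which took about 18s here while kubelet and the CNI came up on m02. A hypothetical client-go sketch of the same wait (kubeconfig path and timeouts are placeholders, not minikube's node_ready.go):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API every 500ms until the node's Ready condition is True.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling on transient errors
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19888-18950/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitNodeReady(cs, "ha-920193-m02", 6*time.Minute))
    }
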
	I1209 22:50:57.160219   36778 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:50:57.160281   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:50:57.160291   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.160297   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.160301   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.163826   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.171109   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.171198   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9792g
	I1209 22:50:57.171207   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.171215   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.171218   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.175686   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:50:57.176418   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.176433   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.176440   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.176445   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.178918   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.179482   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.179502   36778 pod_ready.go:82] duration metric: took 8.366716ms for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.179511   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.179579   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pftgv
	I1209 22:50:57.179590   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.179601   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.179607   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.181884   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.182566   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.182584   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.182593   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.182603   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.184849   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.185336   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.185356   36778 pod_ready.go:82] duration metric: took 5.835616ms for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.185369   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.185431   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193
	I1209 22:50:57.185440   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.185446   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.185452   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.187419   36778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1209 22:50:57.188120   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.188138   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.188148   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.188155   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.190287   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.190719   36778 pod_ready.go:93] pod "etcd-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.190736   36778 pod_ready.go:82] duration metric: took 5.359942ms for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.190748   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.190809   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m02
	I1209 22:50:57.190819   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.190828   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.190835   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.192882   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.193624   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.193638   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.193645   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.193648   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.195725   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.196308   36778 pod_ready.go:93] pod "etcd-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.196330   36778 pod_ready.go:82] duration metric: took 5.570375ms for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.196346   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.357701   36778 request.go:632] Waited for 161.300261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:50:57.357803   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:50:57.357815   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.357826   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.357831   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.361143   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.557163   36778 request.go:632] Waited for 195.392304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.557255   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.557275   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.557286   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.557299   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.560687   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.561270   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.561292   36778 pod_ready.go:82] duration metric: took 364.939583ms for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.561303   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.757400   36778 request.go:632] Waited for 196.034135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:50:57.757501   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:50:57.757514   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.757525   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.757533   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.761021   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.957152   36778 request.go:632] Waited for 195.395123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.957252   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.957262   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.957269   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.957273   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.961000   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.961523   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.961541   36778 pod_ready.go:82] duration metric: took 400.228352ms for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.961551   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.157823   36778 request.go:632] Waited for 196.207607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:50:58.157936   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:50:58.157948   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.157956   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.157960   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.161121   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:58.357017   36778 request.go:632] Waited for 194.771557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:58.357073   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:58.357091   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.357099   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.357103   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.360088   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:58.360518   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:58.360541   36778 pod_ready.go:82] duration metric: took 398.983882ms for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.360551   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.557689   36778 request.go:632] Waited for 197.047701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:50:58.557763   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:50:58.557772   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.557779   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.557783   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.561314   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:58.757454   36778 request.go:632] Waited for 195.361025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:58.757514   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:58.757519   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.757531   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.757540   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.760353   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:58.760931   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:58.760952   36778 pod_ready.go:82] duration metric: took 400.394843ms for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.760961   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.956933   36778 request.go:632] Waited for 195.877051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:50:58.956989   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:50:58.956993   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.957001   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.957005   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.960313   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.157481   36778 request.go:632] Waited for 196.370711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:59.157545   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:59.157551   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.157558   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.157562   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.160790   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.161308   36778 pod_ready.go:93] pod "kube-proxy-lntbt" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:59.161325   36778 pod_ready.go:82] duration metric: took 400.358082ms for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.161334   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.357539   36778 request.go:632] Waited for 196.144123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:50:59.357600   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:50:59.357605   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.357614   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.357619   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.360709   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.557525   36778 request.go:632] Waited for 196.134266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.557582   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.557587   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.557594   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.557599   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.561037   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.561650   36778 pod_ready.go:93] pod "kube-proxy-r8nhm" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:59.561671   36778 pod_ready.go:82] duration metric: took 400.330133ms for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.561686   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.757716   36778 request.go:632] Waited for 195.957167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:50:59.757794   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:50:59.757799   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.757806   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.757810   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.760629   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:59.957516   36778 request.go:632] Waited for 196.356707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.957571   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.957576   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.957583   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.957589   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.960569   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:59.961033   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:59.961052   36778 pod_ready.go:82] duration metric: took 399.355328ms for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.961065   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:51:00.157215   36778 request.go:632] Waited for 196.068129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:51:00.157354   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:51:00.157371   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.157385   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.157393   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.160825   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:00.357607   36778 request.go:632] Waited for 196.256861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:51:00.357660   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:51:00.357665   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.357673   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.357676   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.360928   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:00.361370   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:51:00.361388   36778 pod_ready.go:82] duration metric: took 400.315143ms for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:51:00.361398   36778 pod_ready.go:39] duration metric: took 3.201168669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:51:00.361416   36778 api_server.go:52] waiting for apiserver process to appear ...
	I1209 22:51:00.361461   36778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 22:51:00.375321   36778 api_server.go:72] duration metric: took 21.521720453s to wait for apiserver process to appear ...
	I1209 22:51:00.375346   36778 api_server.go:88] waiting for apiserver healthz status ...
	I1209 22:51:00.375364   36778 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1209 22:51:00.379577   36778 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1209 22:51:00.379640   36778 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1209 22:51:00.379648   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.379656   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.379662   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.380589   36778 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1209 22:51:00.380716   36778 api_server.go:141] control plane version: v1.31.2
	I1209 22:51:00.380756   36778 api_server.go:131] duration metric: took 5.402425ms to wait for apiserver health ...
	I1209 22:51:00.380766   36778 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 22:51:00.557205   36778 request.go:632] Waited for 176.35448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.557271   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.557277   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.557284   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.557289   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.561926   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:00.568583   36778 system_pods.go:59] 17 kube-system pods found
	I1209 22:51:00.568619   36778 system_pods.go:61] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:51:00.568631   36778 system_pods.go:61] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:51:00.568637   36778 system_pods.go:61] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:51:00.568643   36778 system_pods.go:61] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:51:00.568648   36778 system_pods.go:61] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:51:00.568652   36778 system_pods.go:61] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:51:00.568657   36778 system_pods.go:61] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:51:00.568662   36778 system_pods.go:61] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:51:00.568672   36778 system_pods.go:61] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:51:00.568677   36778 system_pods.go:61] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:51:00.568681   36778 system_pods.go:61] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:51:00.568687   36778 system_pods.go:61] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:51:00.568692   36778 system_pods.go:61] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:51:00.568699   36778 system_pods.go:61] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:51:00.568703   36778 system_pods.go:61] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:51:00.568709   36778 system_pods.go:61] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:51:00.568713   36778 system_pods.go:61] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:51:00.568720   36778 system_pods.go:74] duration metric: took 187.947853ms to wait for pod list to return data ...
	I1209 22:51:00.568736   36778 default_sa.go:34] waiting for default service account to be created ...
	I1209 22:51:00.757459   36778 request.go:632] Waited for 188.649373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:51:00.757529   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:51:00.757535   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.757542   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.757549   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.761133   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:00.761462   36778 default_sa.go:45] found service account: "default"
	I1209 22:51:00.761484   36778 default_sa.go:55] duration metric: took 192.741843ms for default service account to be created ...
	I1209 22:51:00.761493   36778 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 22:51:00.957815   36778 request.go:632] Waited for 196.251364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.957869   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.957874   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.957881   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.957886   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.962434   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:00.967784   36778 system_pods.go:86] 17 kube-system pods found
	I1209 22:51:00.967807   36778 system_pods.go:89] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:51:00.967813   36778 system_pods.go:89] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:51:00.967818   36778 system_pods.go:89] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:51:00.967822   36778 system_pods.go:89] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:51:00.967825   36778 system_pods.go:89] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:51:00.967829   36778 system_pods.go:89] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:51:00.967832   36778 system_pods.go:89] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:51:00.967836   36778 system_pods.go:89] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:51:00.967839   36778 system_pods.go:89] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:51:00.967843   36778 system_pods.go:89] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:51:00.967846   36778 system_pods.go:89] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:51:00.967849   36778 system_pods.go:89] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:51:00.967853   36778 system_pods.go:89] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:51:00.967856   36778 system_pods.go:89] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:51:00.967859   36778 system_pods.go:89] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:51:00.967862   36778 system_pods.go:89] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:51:00.967865   36778 system_pods.go:89] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:51:00.967872   36778 system_pods.go:126] duration metric: took 206.369849ms to wait for k8s-apps to be running ...
	I1209 22:51:00.967881   36778 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 22:51:00.967920   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:51:00.982635   36778 system_svc.go:56] duration metric: took 14.746001ms WaitForService to wait for kubelet
	I1209 22:51:00.982658   36778 kubeadm.go:582] duration metric: took 22.129061399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:51:00.982676   36778 node_conditions.go:102] verifying NodePressure condition ...
	I1209 22:51:01.157065   36778 request.go:632] Waited for 174.324712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1209 22:51:01.157132   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1209 22:51:01.157137   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:01.157146   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:01.157150   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:01.161631   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:01.162406   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:51:01.162427   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:51:01.162443   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:51:01.162449   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:51:01.162454   36778 node_conditions.go:105] duration metric: took 179.774021ms to run NodePressure ...
	I1209 22:51:01.162470   36778 start.go:241] waiting for startup goroutines ...
	I1209 22:51:01.162500   36778 start.go:255] writing updated cluster config ...
	I1209 22:51:01.164529   36778 out.go:201] 
	I1209 22:51:01.165967   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:01.166048   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:51:01.167621   36778 out.go:177] * Starting "ha-920193-m03" control-plane node in "ha-920193" cluster
	I1209 22:51:01.168868   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:51:01.168885   36778 cache.go:56] Caching tarball of preloaded images
	I1209 22:51:01.168992   36778 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:51:01.169010   36778 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:51:01.169110   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:51:01.169269   36778 start.go:360] acquireMachinesLock for ha-920193-m03: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:51:01.169312   36778 start.go:364] duration metric: took 23.987µs to acquireMachinesLock for "ha-920193-m03"
	I1209 22:51:01.169336   36778 start.go:93] Provisioning new machine with config: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:51:01.169433   36778 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1209 22:51:01.171416   36778 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 22:51:01.171522   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:01.171583   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:01.186366   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42027
	I1209 22:51:01.186874   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:01.187404   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:01.187428   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:01.187781   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:01.187979   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:01.188140   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:01.188306   36778 start.go:159] libmachine.API.Create for "ha-920193" (driver="kvm2")
	I1209 22:51:01.188339   36778 client.go:168] LocalClient.Create starting
	I1209 22:51:01.188376   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:51:01.188415   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:51:01.188430   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:51:01.188479   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:51:01.188497   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:51:01.188505   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:51:01.188519   36778 main.go:141] libmachine: Running pre-create checks...
	I1209 22:51:01.188524   36778 main.go:141] libmachine: (ha-920193-m03) Calling .PreCreateCheck
	I1209 22:51:01.188706   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetConfigRaw
	I1209 22:51:01.189120   36778 main.go:141] libmachine: Creating machine...
	I1209 22:51:01.189133   36778 main.go:141] libmachine: (ha-920193-m03) Calling .Create
	I1209 22:51:01.189263   36778 main.go:141] libmachine: (ha-920193-m03) Creating KVM machine...
	I1209 22:51:01.190619   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found existing default KVM network
	I1209 22:51:01.190780   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found existing private KVM network mk-ha-920193
	I1209 22:51:01.190893   36778 main.go:141] libmachine: (ha-920193-m03) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03 ...
	I1209 22:51:01.190907   36778 main.go:141] libmachine: (ha-920193-m03) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:51:01.191000   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.190898   37541 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:51:01.191087   36778 main.go:141] libmachine: (ha-920193-m03) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:51:01.428399   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.428270   37541 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa...
	I1209 22:51:01.739906   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.739799   37541 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/ha-920193-m03.rawdisk...
	I1209 22:51:01.739933   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Writing magic tar header
	I1209 22:51:01.739943   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Writing SSH key tar header
	I1209 22:51:01.739951   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.739915   37541 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03 ...
	I1209 22:51:01.740035   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03
	I1209 22:51:01.740064   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03 (perms=drwx------)
	I1209 22:51:01.740080   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:51:01.740097   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:51:01.740107   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:51:01.740114   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:51:01.740127   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:51:01.740140   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:51:01.740154   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:51:01.740167   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:51:01.740178   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:51:01.740189   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home
	I1209 22:51:01.740219   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:51:01.740244   36778 main.go:141] libmachine: (ha-920193-m03) Creating domain...
	I1209 22:51:01.740252   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Skipping /home - not owner
	I1209 22:51:01.741067   36778 main.go:141] libmachine: (ha-920193-m03) define libvirt domain using xml: 
	I1209 22:51:01.741086   36778 main.go:141] libmachine: (ha-920193-m03) <domain type='kvm'>
	I1209 22:51:01.741093   36778 main.go:141] libmachine: (ha-920193-m03)   <name>ha-920193-m03</name>
	I1209 22:51:01.741098   36778 main.go:141] libmachine: (ha-920193-m03)   <memory unit='MiB'>2200</memory>
	I1209 22:51:01.741103   36778 main.go:141] libmachine: (ha-920193-m03)   <vcpu>2</vcpu>
	I1209 22:51:01.741110   36778 main.go:141] libmachine: (ha-920193-m03)   <features>
	I1209 22:51:01.741115   36778 main.go:141] libmachine: (ha-920193-m03)     <acpi/>
	I1209 22:51:01.741119   36778 main.go:141] libmachine: (ha-920193-m03)     <apic/>
	I1209 22:51:01.741124   36778 main.go:141] libmachine: (ha-920193-m03)     <pae/>
	I1209 22:51:01.741128   36778 main.go:141] libmachine: (ha-920193-m03)     
	I1209 22:51:01.741133   36778 main.go:141] libmachine: (ha-920193-m03)   </features>
	I1209 22:51:01.741147   36778 main.go:141] libmachine: (ha-920193-m03)   <cpu mode='host-passthrough'>
	I1209 22:51:01.741152   36778 main.go:141] libmachine: (ha-920193-m03)   
	I1209 22:51:01.741157   36778 main.go:141] libmachine: (ha-920193-m03)   </cpu>
	I1209 22:51:01.741162   36778 main.go:141] libmachine: (ha-920193-m03)   <os>
	I1209 22:51:01.741166   36778 main.go:141] libmachine: (ha-920193-m03)     <type>hvm</type>
	I1209 22:51:01.741171   36778 main.go:141] libmachine: (ha-920193-m03)     <boot dev='cdrom'/>
	I1209 22:51:01.741176   36778 main.go:141] libmachine: (ha-920193-m03)     <boot dev='hd'/>
	I1209 22:51:01.741184   36778 main.go:141] libmachine: (ha-920193-m03)     <bootmenu enable='no'/>
	I1209 22:51:01.741188   36778 main.go:141] libmachine: (ha-920193-m03)   </os>
	I1209 22:51:01.741225   36778 main.go:141] libmachine: (ha-920193-m03)   <devices>
	I1209 22:51:01.741245   36778 main.go:141] libmachine: (ha-920193-m03)     <disk type='file' device='cdrom'>
	I1209 22:51:01.741288   36778 main.go:141] libmachine: (ha-920193-m03)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/boot2docker.iso'/>
	I1209 22:51:01.741325   36778 main.go:141] libmachine: (ha-920193-m03)       <target dev='hdc' bus='scsi'/>
	I1209 22:51:01.741339   36778 main.go:141] libmachine: (ha-920193-m03)       <readonly/>
	I1209 22:51:01.741350   36778 main.go:141] libmachine: (ha-920193-m03)     </disk>
	I1209 22:51:01.741361   36778 main.go:141] libmachine: (ha-920193-m03)     <disk type='file' device='disk'>
	I1209 22:51:01.741373   36778 main.go:141] libmachine: (ha-920193-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:51:01.741386   36778 main.go:141] libmachine: (ha-920193-m03)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/ha-920193-m03.rawdisk'/>
	I1209 22:51:01.741397   36778 main.go:141] libmachine: (ha-920193-m03)       <target dev='hda' bus='virtio'/>
	I1209 22:51:01.741408   36778 main.go:141] libmachine: (ha-920193-m03)     </disk>
	I1209 22:51:01.741418   36778 main.go:141] libmachine: (ha-920193-m03)     <interface type='network'>
	I1209 22:51:01.741429   36778 main.go:141] libmachine: (ha-920193-m03)       <source network='mk-ha-920193'/>
	I1209 22:51:01.741437   36778 main.go:141] libmachine: (ha-920193-m03)       <model type='virtio'/>
	I1209 22:51:01.741447   36778 main.go:141] libmachine: (ha-920193-m03)     </interface>
	I1209 22:51:01.741456   36778 main.go:141] libmachine: (ha-920193-m03)     <interface type='network'>
	I1209 22:51:01.741472   36778 main.go:141] libmachine: (ha-920193-m03)       <source network='default'/>
	I1209 22:51:01.741483   36778 main.go:141] libmachine: (ha-920193-m03)       <model type='virtio'/>
	I1209 22:51:01.741496   36778 main.go:141] libmachine: (ha-920193-m03)     </interface>
	I1209 22:51:01.741507   36778 main.go:141] libmachine: (ha-920193-m03)     <serial type='pty'>
	I1209 22:51:01.741516   36778 main.go:141] libmachine: (ha-920193-m03)       <target port='0'/>
	I1209 22:51:01.741525   36778 main.go:141] libmachine: (ha-920193-m03)     </serial>
	I1209 22:51:01.741534   36778 main.go:141] libmachine: (ha-920193-m03)     <console type='pty'>
	I1209 22:51:01.741544   36778 main.go:141] libmachine: (ha-920193-m03)       <target type='serial' port='0'/>
	I1209 22:51:01.741552   36778 main.go:141] libmachine: (ha-920193-m03)     </console>
	I1209 22:51:01.741566   36778 main.go:141] libmachine: (ha-920193-m03)     <rng model='virtio'>
	I1209 22:51:01.741580   36778 main.go:141] libmachine: (ha-920193-m03)       <backend model='random'>/dev/random</backend>
	I1209 22:51:01.741590   36778 main.go:141] libmachine: (ha-920193-m03)     </rng>
	I1209 22:51:01.741597   36778 main.go:141] libmachine: (ha-920193-m03)     
	I1209 22:51:01.741606   36778 main.go:141] libmachine: (ha-920193-m03)     
	I1209 22:51:01.741616   36778 main.go:141] libmachine: (ha-920193-m03)   </devices>
	I1209 22:51:01.741623   36778 main.go:141] libmachine: (ha-920193-m03) </domain>
	I1209 22:51:01.741635   36778 main.go:141] libmachine: (ha-920193-m03) 
	I1209 22:51:01.749628   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:ca:84:fc in network default
	I1209 22:51:01.750354   36778 main.go:141] libmachine: (ha-920193-m03) Ensuring networks are active...
	I1209 22:51:01.750395   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:01.751100   36778 main.go:141] libmachine: (ha-920193-m03) Ensuring network default is active
	I1209 22:51:01.751465   36778 main.go:141] libmachine: (ha-920193-m03) Ensuring network mk-ha-920193 is active
	I1209 22:51:01.751930   36778 main.go:141] libmachine: (ha-920193-m03) Getting domain xml...
	I1209 22:51:01.752802   36778 main.go:141] libmachine: (ha-920193-m03) Creating domain...
	I1209 22:51:03.003454   36778 main.go:141] libmachine: (ha-920193-m03) Waiting to get IP...
	I1209 22:51:03.004238   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.004647   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.004670   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.004626   37541 retry.go:31] will retry after 297.46379ms: waiting for machine to come up
	I1209 22:51:03.304151   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.304627   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.304651   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.304586   37541 retry.go:31] will retry after 341.743592ms: waiting for machine to come up
	I1209 22:51:03.648185   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.648648   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.648681   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.648610   37541 retry.go:31] will retry after 348.703343ms: waiting for machine to come up
	I1209 22:51:03.999220   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.999761   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.999783   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.999722   37541 retry.go:31] will retry after 471.208269ms: waiting for machine to come up
	I1209 22:51:04.473118   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:04.473644   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:04.473698   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:04.473622   37541 retry.go:31] will retry after 567.031016ms: waiting for machine to come up
	I1209 22:51:05.042388   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:05.042845   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:05.042890   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:05.042828   37541 retry.go:31] will retry after 635.422002ms: waiting for machine to come up
	I1209 22:51:05.679729   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:05.680179   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:05.680197   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:05.680151   37541 retry.go:31] will retry after 1.009913666s: waiting for machine to come up
	I1209 22:51:06.691434   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:06.692093   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:06.692115   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:06.692049   37541 retry.go:31] will retry after 1.22911274s: waiting for machine to come up
	I1209 22:51:07.923301   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:07.923871   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:07.923895   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:07.923821   37541 retry.go:31] will retry after 1.262587003s: waiting for machine to come up
	I1209 22:51:09.187598   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:09.188051   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:09.188081   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:09.188005   37541 retry.go:31] will retry after 2.033467764s: waiting for machine to come up
	I1209 22:51:11.223284   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:11.223845   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:11.223872   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:11.223795   37541 retry.go:31] will retry after 2.889234368s: waiting for machine to come up
	I1209 22:51:14.116824   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:14.117240   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:14.117262   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:14.117201   37541 retry.go:31] will retry after 2.84022101s: waiting for machine to come up
	I1209 22:51:16.958771   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:16.959194   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:16.959219   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:16.959151   37541 retry.go:31] will retry after 3.882632517s: waiting for machine to come up
	I1209 22:51:20.846163   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:20.846626   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:20.846647   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:20.846582   37541 retry.go:31] will retry after 4.879681656s: waiting for machine to come up
	I1209 22:51:25.727341   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.727988   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has current primary IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.728010   36778 main.go:141] libmachine: (ha-920193-m03) Found IP for machine: 192.168.39.45
	I1209 22:51:25.728024   36778 main.go:141] libmachine: (ha-920193-m03) Reserving static IP address...
	I1209 22:51:25.728426   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find host DHCP lease matching {name: "ha-920193-m03", mac: "52:54:00:50:0a:7f", ip: "192.168.39.45"} in network mk-ha-920193
	I1209 22:51:25.801758   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Getting to WaitForSSH function...
	I1209 22:51:25.801788   36778 main.go:141] libmachine: (ha-920193-m03) Reserved static IP address: 192.168.39.45
	I1209 22:51:25.801801   36778 main.go:141] libmachine: (ha-920193-m03) Waiting for SSH to be available...
	I1209 22:51:25.804862   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.805259   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:minikube Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:25.805292   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.805437   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Using SSH client type: external
	I1209 22:51:25.805466   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa (-rw-------)
	I1209 22:51:25.805497   36778 main.go:141] libmachine: (ha-920193-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:51:25.805521   36778 main.go:141] libmachine: (ha-920193-m03) DBG | About to run SSH command:
	I1209 22:51:25.805536   36778 main.go:141] libmachine: (ha-920193-m03) DBG | exit 0
	I1209 22:51:25.927825   36778 main.go:141] libmachine: (ha-920193-m03) DBG | SSH cmd err, output: <nil>: 
	I1209 22:51:25.928111   36778 main.go:141] libmachine: (ha-920193-m03) KVM machine creation complete!
	I1209 22:51:25.928439   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetConfigRaw
	I1209 22:51:25.928948   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:25.929144   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:25.929273   36778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:51:25.929318   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetState
	I1209 22:51:25.930677   36778 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:51:25.930689   36778 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:51:25.930694   36778 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:51:25.930702   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:25.933545   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.933940   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:25.933962   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.934133   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:25.934287   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:25.934450   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:25.934592   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:25.934747   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:25.934964   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:25.934979   36778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:51:26.038809   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:51:26.038831   36778 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:51:26.038839   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.041686   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.041976   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.042008   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.042164   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.042336   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.042474   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.042609   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.042802   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.042955   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.042966   36778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:51:26.148122   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:51:26.148211   36778 main.go:141] libmachine: found compatible host: buildroot
	I1209 22:51:26.148225   36778 main.go:141] libmachine: Provisioning with buildroot...
	I1209 22:51:26.148236   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:26.148529   36778 buildroot.go:166] provisioning hostname "ha-920193-m03"
	I1209 22:51:26.148558   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:26.148758   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.151543   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.151998   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.152027   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.152153   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.152318   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.152485   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.152628   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.152792   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.152967   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.152984   36778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193-m03 && echo "ha-920193-m03" | sudo tee /etc/hostname
	I1209 22:51:26.273873   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193-m03
	
	I1209 22:51:26.273909   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.276949   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.277338   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.277363   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.277530   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.277710   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.277857   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.278009   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.278182   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.278378   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.278395   36778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:51:26.396863   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:51:26.396892   36778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:51:26.396911   36778 buildroot.go:174] setting up certificates
	I1209 22:51:26.396941   36778 provision.go:84] configureAuth start
	I1209 22:51:26.396969   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:26.397249   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:26.400060   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.400552   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.400587   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.400787   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.403205   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.403621   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.403649   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.403809   36778 provision.go:143] copyHostCerts
	I1209 22:51:26.403843   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:51:26.403883   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:51:26.403895   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:51:26.403963   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:51:26.404040   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:51:26.404057   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:51:26.404065   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:51:26.404088   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:51:26.404134   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:51:26.404151   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:51:26.404158   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:51:26.404179   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:51:26.404226   36778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193-m03 san=[127.0.0.1 192.168.39.45 ha-920193-m03 localhost minikube]
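(The "generating server cert" step above issues a node certificate signed by the shared minikube CA, valid for the org and SANs listed in that log line. Below is a minimal, self-contained Go sketch of that idea; the key sizes, validity period, and in-memory throwaway CA are assumptions for illustration, not minikube's actual provision.go code.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA so the sketch is self-contained; minikube would load the
		// existing ca.pem / ca-key.pem instead. Error handling elided for brevity.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0), // validity period assumed
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert for the node, with the org and SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-920193-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.45")},
			DNSNames:     []string{"ha-920193-m03", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		_ = srvDER // would be PEM-encoded to server.pem / server-key.pem
	}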
	I1209 22:51:26.742826   36778 provision.go:177] copyRemoteCerts
	I1209 22:51:26.742899   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:51:26.742929   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.745666   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.745993   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.746025   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.746168   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.746370   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.746525   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.746673   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:26.830893   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:51:26.830957   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:51:26.856889   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:51:26.856964   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 22:51:26.883482   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:51:26.883555   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 22:51:26.908478   36778 provision.go:87] duration metric: took 511.5225ms to configureAuth
	I1209 22:51:26.908504   36778 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:51:26.908720   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:26.908806   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.911525   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.911882   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.911910   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.912106   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.912305   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.912470   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.912617   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.912830   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.913029   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.913046   36778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:51:27.123000   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:51:27.123030   36778 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:51:27.123040   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetURL
	I1209 22:51:27.124367   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Using libvirt version 6000000
	I1209 22:51:27.126749   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.127125   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.127158   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.127291   36778 main.go:141] libmachine: Docker is up and running!
	I1209 22:51:27.127312   36778 main.go:141] libmachine: Reticulating splines...
	I1209 22:51:27.127327   36778 client.go:171] duration metric: took 25.938971166s to LocalClient.Create
	I1209 22:51:27.127361   36778 start.go:167] duration metric: took 25.939054874s to libmachine.API.Create "ha-920193"
	I1209 22:51:27.127375   36778 start.go:293] postStartSetup for "ha-920193-m03" (driver="kvm2")
	I1209 22:51:27.127391   36778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:51:27.127417   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.127659   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:51:27.127685   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:27.130451   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.130869   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.130897   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.131187   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.131380   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.131593   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.131737   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:27.214943   36778 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:51:27.219203   36778 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:51:27.219230   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:51:27.219297   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:51:27.219368   36778 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:51:27.219377   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:51:27.219454   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:51:27.229647   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:51:27.256219   36778 start.go:296] duration metric: took 128.828108ms for postStartSetup
	I1209 22:51:27.256272   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetConfigRaw
	I1209 22:51:27.256939   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:27.259520   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.259847   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.259871   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.260187   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:51:27.260393   36778 start.go:128] duration metric: took 26.090950019s to createHost
	I1209 22:51:27.260418   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:27.262865   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.263234   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.263258   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.263424   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.263637   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.263812   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.263948   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.264111   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:27.264266   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:27.264276   36778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:51:27.367958   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733784687.346724594
	
	I1209 22:51:27.367980   36778 fix.go:216] guest clock: 1733784687.346724594
	I1209 22:51:27.367990   36778 fix.go:229] Guest: 2024-12-09 22:51:27.346724594 +0000 UTC Remote: 2024-12-09 22:51:27.260405928 +0000 UTC m=+144.153092475 (delta=86.318666ms)
	I1209 22:51:27.368010   36778 fix.go:200] guest clock delta is within tolerance: 86.318666ms
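(For reference, the guest-clock check above reduces to subtracting the host-side reference timestamp from the guest's `date +%s.%N` output. A minimal Go sketch using the two values logged; the tolerance threshold is an assumption for illustration, since the log only states the ~86ms delta is "within tolerance".)

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Values from the fix.go lines above.
		guest := time.Unix(1733784687, 346724594)  // guest clock
		remote := time.Unix(1733784687, 260405928) // host-side reference
		delta := guest.Sub(remote)                 // 86.318666ms

		const tolerance = 2 * time.Second // assumed threshold for illustration
		fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
	}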
	I1209 22:51:27.368017   36778 start.go:83] releasing machines lock for "ha-920193-m03", held for 26.19869273s
	I1209 22:51:27.368043   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.368295   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:27.370584   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.370886   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.370925   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.372694   36778 out.go:177] * Found network options:
	I1209 22:51:27.373916   36778 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.43
	W1209 22:51:27.375001   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 22:51:27.375023   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:51:27.375036   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.375488   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.375695   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.375813   36778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:51:27.375854   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	W1209 22:51:27.375861   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 22:51:27.375898   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:51:27.375979   36778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:51:27.376001   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:27.378647   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.378715   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.378991   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.379016   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.379059   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.379077   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.379200   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.379345   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.379350   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.379608   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.379611   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.379810   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.379814   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:27.379979   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:27.613722   36778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:51:27.619553   36778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:51:27.619634   36778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:51:27.635746   36778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 22:51:27.635772   36778 start.go:495] detecting cgroup driver to use...
	I1209 22:51:27.635826   36778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:51:27.653845   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:51:27.668792   36778 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:51:27.668852   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:51:27.683547   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:51:27.698233   36778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:51:27.824917   36778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:51:27.972308   36778 docker.go:233] disabling docker service ...
	I1209 22:51:27.972387   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:51:27.987195   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:51:28.000581   36778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:51:28.137925   36778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:51:28.271243   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:51:28.285221   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:51:28.303416   36778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:51:28.303486   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.314415   36778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:51:28.314487   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.324832   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.336511   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.346899   36778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:51:28.358193   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.368602   36778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.386409   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
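(Taken together, the sed edits above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with settings roughly like the excerpt below. This is a reconstruction from the commands, not a capture from the node, and the TOML section headers are shown only for orientation.)

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]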
	I1209 22:51:28.397070   36778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:51:28.406418   36778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:51:28.406478   36778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:51:28.419010   36778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 22:51:28.428601   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:51:28.547013   36778 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 22:51:28.639590   36778 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:51:28.639672   36778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:51:28.644400   36778 start.go:563] Will wait 60s for crictl version
	I1209 22:51:28.644447   36778 ssh_runner.go:195] Run: which crictl
	I1209 22:51:28.648450   36778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:51:28.685819   36778 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:51:28.685915   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:51:28.713055   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:51:28.743093   36778 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:51:28.744486   36778 out.go:177]   - env NO_PROXY=192.168.39.102
	I1209 22:51:28.745701   36778 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.43
	I1209 22:51:28.746682   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:28.749397   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:28.749762   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:28.749786   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:28.749968   36778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:51:28.754027   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:51:28.765381   36778 mustload.go:65] Loading cluster: ha-920193
	I1209 22:51:28.765606   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:28.765871   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:28.765916   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:28.781482   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I1209 22:51:28.781893   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:28.782266   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:28.782287   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:28.782526   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:28.782726   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:51:28.784149   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:51:28.784420   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:28.784463   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:28.799758   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1209 22:51:28.800232   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:28.800726   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:28.800752   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:28.801514   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:28.801709   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:51:28.801891   36778 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.45
	I1209 22:51:28.801903   36778 certs.go:194] generating shared ca certs ...
	I1209 22:51:28.801923   36778 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:51:28.802065   36778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:51:28.802119   36778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:51:28.802134   36778 certs.go:256] generating profile certs ...
	I1209 22:51:28.802225   36778 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:51:28.802259   36778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a
	I1209 22:51:28.802283   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.43 192.168.39.45 192.168.39.254]
	I1209 22:51:28.918029   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a ...
	I1209 22:51:28.918070   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a: {Name:mkb9baad787ad98ea3bbef921d1279904d63e258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:51:28.918300   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a ...
	I1209 22:51:28.918321   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a: {Name:mk6d0bc06f9a231b982576741314205a71ae81f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:51:28.918454   36778 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:51:28.918653   36778 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
	I1209 22:51:28.918832   36778 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:51:28.918852   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:51:28.918869   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:51:28.918882   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:51:28.918897   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:51:28.918909   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:51:28.918920   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:51:28.918930   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:51:28.918940   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:51:28.918992   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:51:28.919020   36778 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:51:28.919030   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:51:28.919050   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:51:28.919071   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:51:28.919092   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:51:28.919165   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:51:28.919200   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:51:28.919214   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:51:28.919226   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:28.919256   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:51:28.922496   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:28.922907   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:51:28.922924   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:28.923121   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:51:28.923334   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:51:28.923493   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:51:28.923637   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:51:28.995976   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 22:51:29.001595   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 22:51:29.014651   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 22:51:29.018976   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 22:51:29.031698   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 22:51:29.035774   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 22:51:29.047740   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 22:51:29.055239   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 22:51:29.068897   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 22:51:29.073278   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 22:51:29.083471   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 22:51:29.087771   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1209 22:51:29.099200   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:51:29.124484   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:51:29.146898   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:51:29.170925   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:51:29.194172   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1209 22:51:29.216851   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 22:51:29.238922   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:51:29.261472   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:51:29.285294   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:51:29.308795   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:51:29.332153   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:51:29.356878   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 22:51:29.373363   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 22:51:29.389889   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 22:51:29.406229   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 22:51:29.422321   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 22:51:29.439481   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1209 22:51:29.457534   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 22:51:29.474790   36778 ssh_runner.go:195] Run: openssl version
	I1209 22:51:29.480386   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:51:29.491491   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:51:29.496002   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:51:29.496065   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:51:29.501912   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:51:29.512683   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:51:29.523589   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:29.527903   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:29.527953   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:29.533408   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 22:51:29.544241   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:51:29.554741   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:51:29.559538   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:51:29.559622   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:51:29.565390   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 22:51:29.576363   36778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:51:29.580324   36778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:51:29.580397   36778 kubeadm.go:934] updating node {m03 192.168.39.45 8443 v1.31.2 crio true true} ...
	I1209 22:51:29.580506   36778 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 22:51:29.580552   36778 kube-vip.go:115] generating kube-vip config ...
	I1209 22:51:29.580597   36778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:51:29.601123   36778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:51:29.601198   36778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1209 22:51:29.601245   36778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:51:29.616816   36778 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 22:51:29.616873   36778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 22:51:29.626547   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 22:51:29.626551   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1209 22:51:29.626581   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:51:29.626608   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:51:29.626662   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:51:29.626551   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1209 22:51:29.626680   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:51:29.626713   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:51:29.630710   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 22:51:29.630743   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 22:51:29.661909   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:51:29.661957   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 22:51:29.661993   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 22:51:29.662034   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:51:29.693387   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 22:51:29.693423   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
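The stat/scp pairs above are a plain copy-if-missing check: each of kubeadm, kubectl and kubelet is probed on the target with stat and transferred from the local download cache only when the probe fails. A condensed sketch of the same pattern (NODE is a placeholder for the m03 host, and the cache path assumes a default MINIKUBE_HOME):

    VER=v1.31.2
    for BIN in kubeadm kubectl kubelet; do
      SRC=$HOME/.minikube/cache/linux/amd64/$VER/$BIN
      DST=/var/lib/minikube/binaries/$VER/$BIN
      ssh "$NODE" "stat -c '%s %y' $DST" >/dev/null 2>&1 \
        || scp "$SRC" "$NODE:$DST"                   # copy only when the remote stat fails
    done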
	I1209 22:51:30.497307   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 22:51:30.507919   36778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 22:51:30.525676   36778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:51:30.544107   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 22:51:30.560963   36778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:51:30.564949   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:51:30.577803   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:51:30.711834   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:51:30.729249   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:51:30.729790   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:30.729852   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:30.745894   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I1209 22:51:30.746400   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:30.746903   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:30.746923   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:30.747244   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:30.747474   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:51:30.747637   36778 start.go:317] joinCluster: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:51:30.747751   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 22:51:30.747772   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:51:30.750739   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:30.751188   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:51:30.751212   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:30.751382   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:51:30.751610   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:51:30.751784   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:51:30.751955   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:51:30.921112   36778 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:51:30.921184   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uin3tt.yfutths3ueks8fx7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m03 --control-plane --apiserver-advertise-address=192.168.39.45 --apiserver-bind-port=8443"
	I1209 22:51:51.979391   36778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uin3tt.yfutths3ueks8fx7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m03 --control-plane --apiserver-advertise-address=192.168.39.45 --apiserver-bind-port=8443": (21.05816353s)
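The join above is the standard two-step kubeadm flow: an existing control plane prints a join command with a fresh bootstrap token, and the new machine runs it with extra control-plane flags so it comes up as an API server and etcd member rather than a plain worker (minikube additionally passes the CRI socket and --ignore-preflight-errors=all, as shown in the command). Condensed, with the token and hash taken from whatever the first command prints:

    # on an existing control-plane node
    kubeadm token create --print-join-command --ttl=0
    # on the joining node, reusing the printed token and CA-cert hash
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address 192.168.39.45 --apiserver-bind-port 8443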
	I1209 22:51:51.979426   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 22:51:52.687851   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-920193-m03 minikube.k8s.io/updated_at=2024_12_09T22_51_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=ha-920193 minikube.k8s.io/primary=false
	I1209 22:51:52.803074   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-920193-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 22:51:52.923717   36778 start.go:319] duration metric: took 22.176073752s to joinCluster
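The label and taint commands just above finish node bring-up: the labels record minikube metadata and mark the node as non-primary, and the trailing '-' on the taint argument removes the control-plane NoSchedule taint so regular pods can also schedule there. Checking the result afterwards (a sketch):

    kubectl get node ha-920193-m03 -o jsonpath='{.spec.taints}'   # the control-plane taint should no longer be listed
    kubectl get node ha-920193-m03 --show-labels                  # the minikube.k8s.io/* labels applied above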
	I1209 22:51:52.923810   36778 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:51:52.924248   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:52.925117   36778 out.go:177] * Verifying Kubernetes components...
	I1209 22:51:52.927170   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:51:53.166362   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:51:53.186053   36778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:51:53.186348   36778 kapi.go:59] client config for ha-920193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key", CAFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c9a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 22:51:53.186424   36778 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1209 22:51:53.186669   36778 node_ready.go:35] waiting up to 6m0s for node "ha-920193-m03" to be "Ready" ...
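The run of GETs that follows is minikube polling the Node object roughly every 500ms until its NodeReady condition reports True, which happens about 18 seconds later in this log. The equivalent one-off check from a shell using the same kubeconfig (a sketch):

    kubectl get node ha-920193-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints True once the kubelet is Ready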
	I1209 22:51:53.186744   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:53.186755   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:53.186774   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:53.186786   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:53.191049   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:53.686961   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:53.686986   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:53.686997   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:53.687007   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:53.691244   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:54.186985   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:54.187011   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:54.187024   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:54.187030   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:54.265267   36778 round_trippers.go:574] Response Status: 200 OK in 78 milliseconds
	I1209 22:51:54.687008   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:54.687031   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:54.687042   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:54.687050   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:54.690480   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:55.187500   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:55.187525   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:55.187535   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:55.187540   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:55.191178   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:55.191830   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:51:55.687762   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:55.687790   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:55.687802   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:55.687832   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:55.691762   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:56.187494   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:56.187516   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:56.187534   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:56.187543   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:56.191706   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:56.687665   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:56.687691   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:56.687700   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:56.687705   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:56.690707   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:51:57.187710   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:57.187731   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:57.187739   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:57.187743   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:57.191208   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:57.192244   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:51:57.687242   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:57.687266   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:57.687277   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:57.687284   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:57.692231   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:58.187334   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:58.187369   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:58.187404   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:58.187410   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:58.190420   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:51:58.687040   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:58.687060   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:58.687087   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:58.687092   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:58.690458   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:59.187542   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:59.187579   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:59.187590   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:59.187598   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:59.191084   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:59.687057   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:59.687079   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:59.687087   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:59.687090   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:59.762365   36778 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I1209 22:51:59.763672   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:00.187782   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:00.187809   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:00.187824   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:00.187830   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:00.190992   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:00.687396   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:00.687424   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:00.687436   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:00.687443   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:00.690509   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:01.187706   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:01.187726   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:01.187735   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:01.187738   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:01.191284   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:01.687807   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:01.687830   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:01.687838   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:01.687841   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:01.692246   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:02.187139   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:02.187164   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:02.187172   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:02.187176   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:02.191262   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:02.191900   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:02.687239   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:02.687260   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:02.687268   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:02.687272   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:02.690588   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:03.186879   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:03.186901   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:03.186909   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:03.186913   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:03.190077   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:03.686945   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:03.686970   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:03.686976   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:03.686980   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:03.690246   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:04.187422   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:04.187453   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:04.187461   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:04.187475   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:04.190833   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:04.686862   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:04.686888   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:04.686895   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:04.686899   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:04.690474   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:04.691179   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:05.187647   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:05.187672   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:05.187680   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:05.187686   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:05.191042   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:05.687592   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:05.687619   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:05.687631   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:05.687638   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:05.695966   36778 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1209 22:52:06.187585   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:06.187617   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:06.187624   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:06.187627   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:06.190871   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:06.687343   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:06.687365   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:06.687372   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:06.687376   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:06.691065   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:06.691740   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:07.186885   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:07.186908   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:07.186916   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:07.186920   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:07.190452   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:07.687481   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:07.687506   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:07.687517   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:07.687522   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:07.690781   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:08.187842   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:08.187865   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:08.187873   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:08.187877   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:08.190745   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:52:08.687010   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:08.687039   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:08.687047   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:08.687050   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:08.690129   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:09.187057   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:09.187082   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:09.187100   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:09.187105   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:09.190445   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:09.191229   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:09.687849   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:09.687877   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:09.687887   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:09.687896   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:09.691161   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:10.187009   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:10.187030   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:10.187038   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:10.187041   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:10.190809   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:10.687323   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:10.687345   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:10.687353   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:10.687356   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:10.690476   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.187726   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:11.187753   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.187765   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.187771   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.190528   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:52:11.191296   36778 node_ready.go:49] node "ha-920193-m03" has status "Ready":"True"
	I1209 22:52:11.191322   36778 node_ready.go:38] duration metric: took 18.004635224s for node "ha-920193-m03" to be "Ready" ...
	I1209 22:52:11.191347   36778 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
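With the node Ready, minikube lists the kube-system pods once and then checks each system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) individually, which is what the paired pod/node GETs below are doing. A compressed equivalent from the CLI (a sketch, one selector per component from the label list above):

    kubectl -n kube-system wait pod -l k8s-app=kube-dns         --for=condition=Ready --timeout=6m
    kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=6m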
	I1209 22:52:11.191433   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:11.191446   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.191457   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.191463   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.197370   36778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 22:52:11.208757   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.208877   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9792g
	I1209 22:52:11.208889   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.208900   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.208908   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.213394   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.214171   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.214187   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.214197   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.214204   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.217611   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.218273   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.218301   36778 pod_ready.go:82] duration metric: took 9.507458ms for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.218314   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.218394   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pftgv
	I1209 22:52:11.218405   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.218415   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.218420   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.221934   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.223013   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.223030   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.223037   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.223041   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.226045   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:52:11.226613   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.226633   36778 pod_ready.go:82] duration metric: took 8.310101ms for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.226645   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.226713   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193
	I1209 22:52:11.226722   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.226729   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.226736   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.232210   36778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 22:52:11.233134   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.233148   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.233156   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.233159   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.236922   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.237775   36778 pod_ready.go:93] pod "etcd-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.237796   36778 pod_ready.go:82] duration metric: took 11.143234ms for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.237806   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.237867   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m02
	I1209 22:52:11.237875   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.237882   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.237887   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.242036   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.242839   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:11.242858   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.242869   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.242877   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.246444   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.247204   36778 pod_ready.go:93] pod "etcd-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.247221   36778 pod_ready.go:82] duration metric: took 9.409944ms for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.247231   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.388592   36778 request.go:632] Waited for 141.281694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m03
	I1209 22:52:11.388678   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m03
	I1209 22:52:11.388690   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.388704   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.388713   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.392012   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.587869   36778 request.go:632] Waited for 195.273739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:11.587951   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:11.587957   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.587964   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.587968   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.591423   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.592154   36778 pod_ready.go:93] pod "etcd-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.592174   36778 pod_ready.go:82] duration metric: took 344.933564ms for pod "etcd-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.592194   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.788563   36778 request.go:632] Waited for 196.298723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:52:11.788656   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:52:11.788669   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.788679   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.788687   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.792940   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.988037   36778 request.go:632] Waited for 194.354692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.988107   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.988113   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.988121   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.988125   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.992370   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.992995   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.993012   36778 pod_ready.go:82] duration metric: took 400.807496ms for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.993021   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.188095   36778 request.go:632] Waited for 195.006713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:52:12.188167   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:52:12.188172   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.188180   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.188185   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.191780   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.388747   36778 request.go:632] Waited for 196.170639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:12.388823   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:12.388829   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.388856   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.388869   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.392301   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.392894   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:12.392921   36778 pod_ready.go:82] duration metric: took 399.892746ms for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.392938   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.587836   36778 request.go:632] Waited for 194.810311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m03
	I1209 22:52:12.587925   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m03
	I1209 22:52:12.587934   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.587948   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.587958   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.591021   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.787947   36778 request.go:632] Waited for 196.297135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:12.788010   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:12.788016   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.788024   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.788032   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.791450   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.792173   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:12.792194   36778 pod_ready.go:82] duration metric: took 399.248841ms for pod "kube-apiserver-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.792210   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.988330   36778 request.go:632] Waited for 196.053217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:52:12.988409   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:52:12.988415   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.988423   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.988428   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.992155   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.188272   36778 request.go:632] Waited for 195.156662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:13.188340   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:13.188346   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.188354   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.188362   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.192008   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.192630   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:13.192650   36778 pod_ready.go:82] duration metric: took 400.432601ms for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.192661   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.388559   36778 request.go:632] Waited for 195.821537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:52:13.388616   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:52:13.388621   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.388629   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.388634   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.391883   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.587935   36778 request.go:632] Waited for 195.28191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:13.587989   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:13.587994   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.588007   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.588010   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.591630   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.592151   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:13.592169   36778 pod_ready.go:82] duration metric: took 399.499137ms for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.592180   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.788332   36778 request.go:632] Waited for 196.084844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m03
	I1209 22:52:13.788412   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m03
	I1209 22:52:13.788419   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.788429   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.788435   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.792121   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.988484   36778 request.go:632] Waited for 195.461528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:13.988555   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:13.988567   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.988579   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.988589   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.992243   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.992809   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:13.992827   36778 pod_ready.go:82] duration metric: took 400.64066ms for pod "kube-controller-manager-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.992842   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.187961   36778 request.go:632] Waited for 195.049639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:52:14.188050   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:52:14.188058   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.188071   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.188080   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.191692   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.388730   36778 request.go:632] Waited for 196.239352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:14.388788   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:14.388802   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.388813   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.388817   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.392311   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.392971   36778 pod_ready.go:93] pod "kube-proxy-lntbt" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:14.392992   36778 pod_ready.go:82] duration metric: took 400.138793ms for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.393007   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pr7zk" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.588013   36778 request.go:632] Waited for 194.93384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pr7zk
	I1209 22:52:14.588077   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pr7zk
	I1209 22:52:14.588082   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.588095   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.588102   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.591447   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.788698   36778 request.go:632] Waited for 196.390033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:14.788766   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:14.788775   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.788787   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.788800   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.792338   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.793156   36778 pod_ready.go:93] pod "kube-proxy-pr7zk" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:14.793181   36778 pod_ready.go:82] duration metric: took 400.165156ms for pod "kube-proxy-pr7zk" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.793195   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.988348   36778 request.go:632] Waited for 195.014123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:52:14.988427   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:52:14.988434   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.988444   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.988457   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.993239   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:15.188292   36778 request.go:632] Waited for 194.264701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.188390   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.188403   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.188418   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.188429   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.192041   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.192565   36778 pod_ready.go:93] pod "kube-proxy-r8nhm" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:15.192584   36778 pod_ready.go:82] duration metric: took 399.381952ms for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.192595   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.388147   36778 request.go:632] Waited for 195.488765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:52:15.388224   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:52:15.388233   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.388240   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.388248   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.391603   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.588758   36778 request.go:632] Waited for 196.3144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.588837   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.588843   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.588850   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.588860   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.592681   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.593301   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:15.593327   36778 pod_ready.go:82] duration metric: took 400.724982ms for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.593343   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.788627   36778 request.go:632] Waited for 195.204455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:52:15.788686   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:52:15.788691   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.788699   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.788704   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.792349   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.988329   36778 request.go:632] Waited for 195.36216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:15.988396   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:15.988402   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.988408   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.988412   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.991578   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.992400   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:15.992418   36778 pod_ready.go:82] duration metric: took 399.067203ms for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.992428   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:16.188427   36778 request.go:632] Waited for 195.939633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m03
	I1209 22:52:16.188480   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m03
	I1209 22:52:16.188489   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.188496   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.188501   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.192012   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:16.388006   36778 request.go:632] Waited for 195.368293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:16.388057   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:16.388062   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.388069   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.388073   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.392950   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:16.393391   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:16.393409   36778 pod_ready.go:82] duration metric: took 400.975145ms for pod "kube-scheduler-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:16.393420   36778 pod_ready.go:39] duration metric: took 5.202056835s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:52:16.393435   36778 api_server.go:52] waiting for apiserver process to appear ...
	I1209 22:52:16.393482   36778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 22:52:16.409725   36778 api_server.go:72] duration metric: took 23.485873684s to wait for apiserver process to appear ...
	I1209 22:52:16.409759   36778 api_server.go:88] waiting for apiserver healthz status ...
	I1209 22:52:16.409786   36778 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1209 22:52:16.414224   36778 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1209 22:52:16.414307   36778 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1209 22:52:16.414316   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.414324   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.414330   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.415229   36778 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1209 22:52:16.415280   36778 api_server.go:141] control plane version: v1.31.2
	I1209 22:52:16.415291   36778 api_server.go:131] duration metric: took 5.527187ms to wait for apiserver health ...
	I1209 22:52:16.415298   36778 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 22:52:16.588740   36778 request.go:632] Waited for 173.378808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.588806   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.588811   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.588818   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.588822   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.595459   36778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 22:52:16.602952   36778 system_pods.go:59] 24 kube-system pods found
	I1209 22:52:16.602979   36778 system_pods.go:61] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:52:16.602985   36778 system_pods.go:61] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:52:16.602989   36778 system_pods.go:61] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:52:16.602993   36778 system_pods.go:61] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:52:16.602996   36778 system_pods.go:61] "etcd-ha-920193-m03" [f9f5fa3d-3d06-47a8-b3f2-ed544b1cfb74] Running
	I1209 22:52:16.603001   36778 system_pods.go:61] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:52:16.603004   36778 system_pods.go:61] "kindnet-drj9q" [662d116b-e7ec-437c-9eeb-207cee5beecd] Running
	I1209 22:52:16.603007   36778 system_pods.go:61] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:52:16.603010   36778 system_pods.go:61] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:52:16.603015   36778 system_pods.go:61] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:52:16.603018   36778 system_pods.go:61] "kube-apiserver-ha-920193-m03" [1a55c551-c7bf-4b9f-ac49-e750cec2a92f] Running
	I1209 22:52:16.603022   36778 system_pods.go:61] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:52:16.603026   36778 system_pods.go:61] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:52:16.603031   36778 system_pods.go:61] "kube-controller-manager-ha-920193-m03" [d5278ea9-8878-4c18-bb6a-abb82a5bde12] Running
	I1209 22:52:16.603035   36778 system_pods.go:61] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:52:16.603038   36778 system_pods.go:61] "kube-proxy-pr7zk" [8236ddcf-f836-449c-8401-f799dd1a94f8] Running
	I1209 22:52:16.603041   36778 system_pods.go:61] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:52:16.603044   36778 system_pods.go:61] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:52:16.603047   36778 system_pods.go:61] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:52:16.603050   36778 system_pods.go:61] "kube-scheduler-ha-920193-m03" [27607c05-10b3-4d82-ba5e-2f3e7a44d2ad] Running
	I1209 22:52:16.603054   36778 system_pods.go:61] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:52:16.603057   36778 system_pods.go:61] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:52:16.603060   36778 system_pods.go:61] "kube-vip-ha-920193-m03" [7a747c56-20d6-41bb-b7c1-1db054ee699c] Running
	I1209 22:52:16.603062   36778 system_pods.go:61] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:52:16.603068   36778 system_pods.go:74] duration metric: took 187.765008ms to wait for pod list to return data ...
	I1209 22:52:16.603077   36778 default_sa.go:34] waiting for default service account to be created ...
	I1209 22:52:16.788510   36778 request.go:632] Waited for 185.359314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:52:16.788566   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:52:16.788571   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.788579   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.788586   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.791991   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:16.792139   36778 default_sa.go:45] found service account: "default"
	I1209 22:52:16.792154   36778 default_sa.go:55] duration metric: took 189.072143ms for default service account to be created ...
	I1209 22:52:16.792164   36778 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 22:52:16.988637   36778 request.go:632] Waited for 196.396881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.988723   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.988732   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.988740   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.988743   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.995659   36778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 22:52:17.002627   36778 system_pods.go:86] 24 kube-system pods found
	I1209 22:52:17.002660   36778 system_pods.go:89] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:52:17.002667   36778 system_pods.go:89] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:52:17.002672   36778 system_pods.go:89] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:52:17.002676   36778 system_pods.go:89] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:52:17.002679   36778 system_pods.go:89] "etcd-ha-920193-m03" [f9f5fa3d-3d06-47a8-b3f2-ed544b1cfb74] Running
	I1209 22:52:17.002683   36778 system_pods.go:89] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:52:17.002686   36778 system_pods.go:89] "kindnet-drj9q" [662d116b-e7ec-437c-9eeb-207cee5beecd] Running
	I1209 22:52:17.002690   36778 system_pods.go:89] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:52:17.002693   36778 system_pods.go:89] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:52:17.002697   36778 system_pods.go:89] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:52:17.002700   36778 system_pods.go:89] "kube-apiserver-ha-920193-m03" [1a55c551-c7bf-4b9f-ac49-e750cec2a92f] Running
	I1209 22:52:17.002703   36778 system_pods.go:89] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:52:17.002707   36778 system_pods.go:89] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:52:17.002710   36778 system_pods.go:89] "kube-controller-manager-ha-920193-m03" [d5278ea9-8878-4c18-bb6a-abb82a5bde12] Running
	I1209 22:52:17.002717   36778 system_pods.go:89] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:52:17.002720   36778 system_pods.go:89] "kube-proxy-pr7zk" [8236ddcf-f836-449c-8401-f799dd1a94f8] Running
	I1209 22:52:17.002723   36778 system_pods.go:89] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:52:17.002726   36778 system_pods.go:89] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:52:17.002730   36778 system_pods.go:89] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:52:17.002734   36778 system_pods.go:89] "kube-scheduler-ha-920193-m03" [27607c05-10b3-4d82-ba5e-2f3e7a44d2ad] Running
	I1209 22:52:17.002738   36778 system_pods.go:89] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:52:17.002740   36778 system_pods.go:89] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:52:17.002744   36778 system_pods.go:89] "kube-vip-ha-920193-m03" [7a747c56-20d6-41bb-b7c1-1db054ee699c] Running
	I1209 22:52:17.002747   36778 system_pods.go:89] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:52:17.002753   36778 system_pods.go:126] duration metric: took 210.583954ms to wait for k8s-apps to be running ...
	I1209 22:52:17.002760   36778 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 22:52:17.002802   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:52:17.018265   36778 system_svc.go:56] duration metric: took 15.492212ms WaitForService to wait for kubelet
	I1209 22:52:17.018301   36778 kubeadm.go:582] duration metric: took 24.09445385s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:52:17.018323   36778 node_conditions.go:102] verifying NodePressure condition ...
	I1209 22:52:17.188743   36778 request.go:632] Waited for 170.323133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1209 22:52:17.188800   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1209 22:52:17.188807   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:17.188816   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:17.188823   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:17.193008   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:17.194620   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:52:17.194642   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:52:17.194653   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:52:17.194657   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:52:17.194661   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:52:17.194664   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:52:17.194668   36778 node_conditions.go:105] duration metric: took 176.339707ms to run NodePressure ...
	I1209 22:52:17.194678   36778 start.go:241] waiting for startup goroutines ...
	I1209 22:52:17.194700   36778 start.go:255] writing updated cluster config ...
	I1209 22:52:17.194994   36778 ssh_runner.go:195] Run: rm -f paused
	I1209 22:52:17.247192   36778 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 22:52:17.250117   36778 out.go:177] * Done! kubectl is now configured to use "ha-920193" cluster and "default" namespace by default
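
	Editor's note: the api_server.go lines above show the wait pattern minikube uses here: poll the apiserver's /healthz endpoint until it answers 200 with body "ok", then move on to version and pod checks. The Go sketch below is a minimal, hypothetical illustration of that polling loop, not minikube's actual implementation; the URL, timeout, and the InsecureSkipVerify setting are assumptions chosen to match the self-signed test cluster seen in this log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 with body "ok",
	// or the timeout elapses. Mirrors the healthz wait visible in the log.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The test cluster's apiserver uses a self-signed CA; skipping
			// verification here is purely for the sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		// Hypothetical endpoint taken from the log above.
		if err := waitForHealthz("https://192.168.39.102:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthy")
	}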
	
	
	==> CRI-O <==
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.221766825Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784962221656662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=906f28c6-201f-4750-8a83-c79b487c6101 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.222321072Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abb411d7-2d43-437a-a003-77c22d6e4760 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.222396829Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abb411d7-2d43-437a-a003-77c22d6e4760 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.223952442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abb411d7-2d43-437a-a003-77c22d6e4760 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.262929738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9c70c09-80c1-4370-8c86-3a606441a639 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.263003101Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9c70c09-80c1-4370-8c86-3a606441a639 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.264068737Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9079b85e-bb9f-4cb6-bba3-7487c818ccfc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.264756800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784962264729508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9079b85e-bb9f-4cb6-bba3-7487c818ccfc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.265296564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37432d42-1203-4266-b512-7c71951ca78f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.265351834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37432d42-1203-4266-b512-7c71951ca78f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.265837416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37432d42-1203-4266-b512-7c71951ca78f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.307122919Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b590e615-4fd1-4401-bec1-f825498b921d name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.307194900Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b590e615-4fd1-4401-bec1-f825498b921d name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.308512971Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb36773e-03fc-493e-a385-22975c960264 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.309025486Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784962308997832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb36773e-03fc-493e-a385-22975c960264 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.309803740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=420cc763-b7a1-4ca7-a6ec-2ef49c77522d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.309859005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=420cc763-b7a1-4ca7-a6ec-2ef49c77522d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.310086990Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=420cc763-b7a1-4ca7-a6ec-2ef49c77522d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.347498825Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a526ed0d-8ec7-4a87-8ea4-9bfa27def68b name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.347592967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a526ed0d-8ec7-4a87-8ea4-9bfa27def68b name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.349352277Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf26c537-9bb3-459c-8d10-4b711845294a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.350949312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784962350917650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf26c537-9bb3-459c-8d10-4b711845294a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.351434679Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed53659d-6afa-4a54-9ab6-c0f87abc3175 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.351486872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed53659d-6afa-4a54-9ab6-c0f87abc3175 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:02 ha-920193 crio[663]: time="2024-12-09 22:56:02.351755423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed53659d-6afa-4a54-9ab6-c0f87abc3175 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b2098445c3438       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   32c399f593c29       busybox-7dff88458-4dbs2
	14b80feac0f9a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   28a5e497d421c       coredns-7c65d6cfc9-9792g
	6bdcee2ff30bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   8986bab4f9538       coredns-7c65d6cfc9-pftgv
	a6a62ed3f6ca8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   24f95152f1094       storage-provisioner
	d26f562ad5527       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   91e324c9c3171       kindnet-rcctv
	233aa49869db4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   7d30b07a36a6c       kube-proxy-r8nhm
	b845a7a938050       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   dcec6011252c4       kube-vip-ha-920193
	2c5a043b38715       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   a053c05339f97       kube-apiserver-ha-920193
	f0a29f1dc44e4       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   7dd45ba230f90       kube-controller-manager-ha-920193
	b8197a166eeaf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   5b9cd68863c14       etcd-ha-920193
	6ee0fecee78f0       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   ba6c2156966ab       kube-scheduler-ha-920193
	
	
	==> coredns [14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c] <==
	[INFO] 10.244.2.2:60285 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00013048s
	[INFO] 10.244.0.4:42105 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201273s
	[INFO] 10.244.0.4:33722 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003973627s
	[INFO] 10.244.0.4:50780 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003385872s
	[INFO] 10.244.0.4:46762 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000330906s
	[INFO] 10.244.0.4:41821 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099413s
	[INFO] 10.244.1.2:38814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240081s
	[INFO] 10.244.1.2:51472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001124121s
	[INFO] 10.244.1.2:49496 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094508s
	[INFO] 10.244.2.2:44597 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168981s
	[INFO] 10.244.2.2:56334 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001450617s
	[INFO] 10.244.2.2:52317 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077228s
	[INFO] 10.244.0.4:57299 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133066s
	[INFO] 10.244.0.4:56277 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119106s
	[INFO] 10.244.0.4:45466 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040838s
	[INFO] 10.244.1.2:44460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200839s
	[INFO] 10.244.2.2:38498 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135133s
	[INFO] 10.244.2.2:50433 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00021653s
	[INFO] 10.244.2.2:49338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098224s
	[INFO] 10.244.0.4:33757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178322s
	[INFO] 10.244.0.4:48357 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000197259s
	[INFO] 10.244.0.4:36014 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000126459s
	[INFO] 10.244.1.2:50940 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000306385s
	[INFO] 10.244.2.2:39693 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191708s
	[INFO] 10.244.2.2:43130 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156713s
	
	
	==> coredns [6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a] <==
	[INFO] 10.244.2.2:53803 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001802154s
	[INFO] 10.244.0.4:53804 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136883s
	[INFO] 10.244.0.4:33536 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133128s
	[INFO] 10.244.0.4:40697 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109987s
	[INFO] 10.244.1.2:60686 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001746087s
	[INFO] 10.244.1.2:57981 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000176425s
	[INFO] 10.244.1.2:42922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001279s
	[INFO] 10.244.1.2:49248 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000199359s
	[INFO] 10.244.1.2:56349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176613s
	[INFO] 10.244.2.2:37288 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194316s
	[INFO] 10.244.2.2:36807 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001853178s
	[INFO] 10.244.2.2:47892 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097133s
	[INFO] 10.244.2.2:50492 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000249713s
	[INFO] 10.244.2.2:42642 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102673s
	[INFO] 10.244.0.4:45744 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170409s
	[INFO] 10.244.1.2:36488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227015s
	[INFO] 10.244.1.2:37416 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118932s
	[INFO] 10.244.1.2:48536 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176061s
	[INFO] 10.244.2.2:47072 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110597s
	[INFO] 10.244.0.4:58052 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000268133s
	[INFO] 10.244.1.2:38540 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000277422s
	[INFO] 10.244.1.2:55804 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000232786s
	[INFO] 10.244.1.2:35281 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000214405s
	[INFO] 10.244.2.2:37415 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174588s
	[INFO] 10.244.2.2:32790 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097554s
	
	
	==> describe nodes <==
	Name:               ha-920193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T22_49_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:49:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:56:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:49:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:49:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:49:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:50:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-920193
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9825096d628741caa811f99c10cc6460
	  System UUID:                9825096d-6287-41ca-a811-f99c10cc6460
	  Boot ID:                    7af2b544-54c4-4e33-8dc8-e2313bb29389
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4dbs2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 coredns-7c65d6cfc9-9792g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 coredns-7c65d6cfc9-pftgv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 etcd-ha-920193                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-rcctv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m14s
	  kube-system                 kube-apiserver-ha-920193             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-920193    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-r8nhm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-scheduler-ha-920193             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-vip-ha-920193                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m12s  kube-proxy       
	  Normal  Starting                 6m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m18s  kubelet          Node ha-920193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s  kubelet          Node ha-920193 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s  kubelet          Node ha-920193 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m14s  node-controller  Node ha-920193 event: Registered Node ha-920193 in Controller
	  Normal  NodeReady                5m58s  kubelet          Node ha-920193 status is now: NodeReady
	  Normal  RegisteredNode           5m19s  node-controller  Node ha-920193 event: Registered Node ha-920193 in Controller
	  Normal  RegisteredNode           4m5s   node-controller  Node ha-920193 event: Registered Node ha-920193 in Controller
	
	
	Name:               ha-920193-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T22_50_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:50:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:53:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-920193-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 418684ffa8244b8180cf28f3a347b4c2
	  System UUID:                418684ff-a824-4b81-80cf-28f3a347b4c2
	  Boot ID:                    15131626-aa5d-4727-aedd-7039ff10fa6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rkqdv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-920193-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m24s
	  kube-system                 kindnet-7bbbc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m26s
	  kube-system                 kube-apiserver-ha-920193-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-ha-920193-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-proxy-lntbt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-ha-920193-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-vip-ha-920193-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m24s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m26s (x8 over 5m27s)  kubelet          Node ha-920193-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s (x8 over 5m27s)  kubelet          Node ha-920193-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s (x7 over 5m27s)  kubelet          Node ha-920193-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-920193-m02 event: Registered Node ha-920193-m02 in Controller
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-920193-m02 event: Registered Node ha-920193-m02 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-920193-m02 event: Registered Node ha-920193-m02 in Controller
	  Normal  NodeNotReady             110s                   node-controller  Node ha-920193-m02 status is now: NodeNotReady
	
	
	Name:               ha-920193-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T22_51_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:51:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:55:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:51:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:51:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:51:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:52:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    ha-920193-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c09ac2bcafe5487187b79c07f4dd9720
	  System UUID:                c09ac2bc-afe5-4871-87b7-9c07f4dd9720
	  Boot ID:                    1fbc2da5-2f05-4c65-92cc-ea55dc184e77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zshqx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-920193-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m11s
	  kube-system                 kindnet-drj9q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m13s
	  kube-system                 kube-apiserver-ha-920193-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-ha-920193-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-proxy-pr7zk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-ha-920193-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-vip-ha-920193-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  CIDRAssignmentFailed     4m13s                  cidrAllocator    Node ha-920193-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node ha-920193-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node ha-920193-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m13s)  kubelet          Node ha-920193-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-920193-m03 event: Registered Node ha-920193-m03 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-920193-m03 event: Registered Node ha-920193-m03 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-920193-m03 event: Registered Node ha-920193-m03 in Controller
	
	
	Name:               ha-920193-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T22_52_56_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:52:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:55:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:52:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:52:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:52:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:53:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    ha-920193-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a2dbc042e3045febd5c0c9d1b2c22ec
	  System UUID:                4a2dbc04-2e30-45fe-bd5c-0c9d1b2c22ec
	  Boot ID:                    1261e6c2-362c-4edd-9457-2b833cda280a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4pzwv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m7s
	  kube-system                 kube-proxy-7d45n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     3m7s                 cidrAllocator    Node ha-920193-m04 status is now: CIDRAssignmentFailed
	  Normal  Starting                 3m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m7s)  kubelet          Node ha-920193-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m7s)  kubelet          Node ha-920193-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m7s)  kubelet          Node ha-920193-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-920193-m04 event: Registered Node ha-920193-m04 in Controller
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-920193-m04 event: Registered Node ha-920193-m04 in Controller
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-920193-m04 event: Registered Node ha-920193-m04 in Controller
	  Normal  NodeReady                2m46s                kubelet          Node ha-920193-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 9 22:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049320] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036387] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.848109] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.938823] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.563382] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.738770] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.057878] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055312] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.165760] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.148687] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.252407] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.807769] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.142269] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.067556] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.253709] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.082838] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.454038] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 9 22:50] kauditd_printk_skb: 40 callbacks suppressed
	[ +36.675272] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9] <==
	{"level":"warn","ts":"2024-12-09T22:56:02.607541Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.614132Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.617228Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.626541Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.635606Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.636567Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.648723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.655977Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.656215Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.666000Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.675907Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.688030Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.700947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.708843Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.711999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.714956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.719981Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.725543Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.731060Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.734353Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.736972Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.740078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.745108Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.748992Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:02.751266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 22:56:02 up 6 min,  0 users,  load average: 0.43, 0.27, 0.13
	Linux ha-920193 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a] <==
	I1209 22:55:24.240338       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:55:34.244268       1 main.go:297] Handling node with IPs: map[192.168.39.98:{}]
	I1209 22:55:34.244372       1 main.go:324] Node ha-920193-m04 has CIDR [10.244.3.0/24] 
	I1209 22:55:34.244633       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1209 22:55:34.244725       1 main.go:301] handling current node
	I1209 22:55:34.244752       1 main.go:297] Handling node with IPs: map[192.168.39.43:{}]
	I1209 22:55:34.244770       1 main.go:324] Node ha-920193-m02 has CIDR [10.244.1.0/24] 
	I1209 22:55:34.244900       1 main.go:297] Handling node with IPs: map[192.168.39.45:{}]
	I1209 22:55:34.244924       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:55:44.241125       1 main.go:297] Handling node with IPs: map[192.168.39.45:{}]
	I1209 22:55:44.241179       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:55:44.241517       1 main.go:297] Handling node with IPs: map[192.168.39.98:{}]
	I1209 22:55:44.241554       1 main.go:324] Node ha-920193-m04 has CIDR [10.244.3.0/24] 
	I1209 22:55:44.242208       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1209 22:55:44.242246       1 main.go:301] handling current node
	I1209 22:55:44.242264       1 main.go:297] Handling node with IPs: map[192.168.39.43:{}]
	I1209 22:55:44.242279       1 main.go:324] Node ha-920193-m02 has CIDR [10.244.1.0/24] 
	I1209 22:55:54.237055       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1209 22:55:54.237098       1 main.go:301] handling current node
	I1209 22:55:54.237112       1 main.go:297] Handling node with IPs: map[192.168.39.43:{}]
	I1209 22:55:54.237117       1 main.go:324] Node ha-920193-m02 has CIDR [10.244.1.0/24] 
	I1209 22:55:54.237320       1 main.go:297] Handling node with IPs: map[192.168.39.45:{}]
	I1209 22:55:54.237342       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:55:54.237447       1 main.go:297] Handling node with IPs: map[192.168.39.98:{}]
	I1209 22:55:54.237463       1 main.go:324] Node ha-920193-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581] <==
	W1209 22:49:43.150982       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102]
	I1209 22:49:43.152002       1 controller.go:615] quota admission added evaluator for: endpoints
	I1209 22:49:43.156330       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 22:49:43.387632       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1209 22:49:44.564732       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1209 22:49:44.579130       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 22:49:44.588831       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1209 22:49:48.591895       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1209 22:49:48.841334       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1209 22:52:22.354256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36040: use of closed network connection
	E1209 22:52:22.536970       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36064: use of closed network connection
	E1209 22:52:22.712523       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36088: use of closed network connection
	E1209 22:52:22.898417       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36102: use of closed network connection
	E1209 22:52:23.071122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36126: use of closed network connection
	E1209 22:52:23.250546       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36138: use of closed network connection
	E1209 22:52:23.423505       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36152: use of closed network connection
	E1209 22:52:23.596493       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36174: use of closed network connection
	E1209 22:52:23.770267       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36200: use of closed network connection
	E1209 22:52:24.059362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36220: use of closed network connection
	E1209 22:52:24.222108       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36234: use of closed network connection
	E1209 22:52:24.394542       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36254: use of closed network connection
	E1209 22:52:24.570825       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36280: use of closed network connection
	E1209 22:52:24.742045       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36308: use of closed network connection
	E1209 22:52:24.918566       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36330: use of closed network connection
	W1209 22:53:53.164722       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.45]
	
	
	==> kube-controller-manager [f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a] <==
	I1209 22:52:55.696316       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	E1209 22:52:55.827513       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"d21ce5c2-c9ae-46d3-8e56-962d14b633c9\", ResourceVersion:\"913\", Generation:1, CreationTimestamp:time.Date(2024, time.December, 9, 22, 49, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\
",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20241108-5c6d2daf\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00247f6a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\
"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0026282e8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolume
ClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002628300), EmptyDir:(*v1.EmptyDirVolumeSource)
(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.Portworx
VolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002628318), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Az
ureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20241108-5c6d2daf\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00247f6c0)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarS
ource)(0xc00247f700)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:fals
e, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc00298a060), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralCont
ainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002895a00), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002509e80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), O
verhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0027a7a80)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002895a3c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1209 22:52:55.828552       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"6fe45e3d-72f3-4c58-8284-ee89d6d57a36\", ResourceVersion:\"871\", Generation:1, CreationTimestamp:time.Date(2024, time.December, 9, 22, 49, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00197c7a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\"
, Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)
(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00265ecc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002193ae8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolume
Source)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVol
umeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002193b00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtual
DiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.31.2\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00197c7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Reso
urceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"
/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0026ee600), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002860a98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0025a4880), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostA
lias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002693bd0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002860af0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled
on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1209 22:52:56.102815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:57.678400       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.159889       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.160065       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-920193-m04"
	I1209 22:52:58.180925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.828069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.908919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:05.805409       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:16.012967       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-920193-m04"
	I1209 22:53:16.013430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:16.029012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:17.646042       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:25.994489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:54:12.667473       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-920193-m04"
	I1209 22:54:12.668375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	I1209 22:54:12.690072       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	I1209 22:54:12.722935       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.821273ms"
	I1209 22:54:12.724268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.814µs"
	I1209 22:54:13.270393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	I1209 22:54:17.915983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	
	
	==> kube-proxy [233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 22:49:50.258403       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 22:49:50.274620       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	E1209 22:49:50.274749       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 22:49:50.309286       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 22:49:50.309340       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 22:49:50.309367       1 server_linux.go:169] "Using iptables Proxier"
	I1209 22:49:50.311514       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 22:49:50.312044       1 server.go:483] "Version info" version="v1.31.2"
	I1209 22:49:50.312073       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 22:49:50.314372       1 config.go:199] "Starting service config controller"
	I1209 22:49:50.314401       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 22:49:50.314584       1 config.go:105] "Starting endpoint slice config controller"
	I1209 22:49:50.314607       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 22:49:50.315221       1 config.go:328] "Starting node config controller"
	I1209 22:49:50.315250       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 22:49:50.415190       1 shared_informer.go:320] Caches are synced for service config
	I1209 22:49:50.415151       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 22:49:50.415308       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963] <==
	W1209 22:49:42.622383       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 22:49:42.622920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 22:49:42.673980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 22:49:42.674373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:49:42.700294       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 22:49:42.700789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 22:49:44.393323       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1209 22:52:18.167059       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkqdv\": pod busybox-7dff88458-rkqdv is already assigned to node \"ha-920193-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rkqdv" node="ha-920193-m02"
	E1209 22:52:18.167170       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c1517f25-fc19-4255-b4c6-9a02511b80c3(default/busybox-7dff88458-rkqdv) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rkqdv"
	E1209 22:52:18.167196       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkqdv\": pod busybox-7dff88458-rkqdv is already assigned to node \"ha-920193-m02\"" pod="default/busybox-7dff88458-rkqdv"
	I1209 22:52:18.167215       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rkqdv" node="ha-920193-m02"
	E1209 22:52:55.621239       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x5mqb\": pod kindnet-x5mqb is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-x5mqb" node="ha-920193-m04"
	E1209 22:52:55.621341       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x5mqb\": pod kindnet-x5mqb is already assigned to node \"ha-920193-m04\"" pod="kube-system/kindnet-x5mqb"
	E1209 22:52:55.648021       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-k5v9w\": pod kube-proxy-k5v9w is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-k5v9w" node="ha-920193-m04"
	E1209 22:52:55.648095       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5882629a-a929-45e4-b026-e75a2c17d56d(kube-system/kube-proxy-k5v9w) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-k5v9w"
	E1209 22:52:55.648113       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-k5v9w\": pod kube-proxy-k5v9w is already assigned to node \"ha-920193-m04\"" pod="kube-system/kube-proxy-k5v9w"
	I1209 22:52:55.648138       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-k5v9w" node="ha-920193-m04"
	E1209 22:52:55.758943       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mp7q7\": pod kube-proxy-mp7q7 is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mp7q7" node="ha-920193-m04"
	E1209 22:52:55.759080       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a4d32bae-6ec6-4338-8689-3b32518b021b(kube-system/kube-proxy-mp7q7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-mp7q7"
	E1209 22:52:55.759142       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mp7q7\": pod kube-proxy-mp7q7 is already assigned to node \"ha-920193-m04\"" pod="kube-system/kube-proxy-mp7q7"
	I1209 22:52:55.759188       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-mp7q7" node="ha-920193-m04"
	E1209 22:52:55.775999       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7d45n\": pod kube-proxy-7d45n is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7d45n" node="ha-920193-m04"
	E1209 22:52:55.776095       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7d45n\": pod kube-proxy-7d45n is already assigned to node \"ha-920193-m04\"" pod="kube-system/kube-proxy-7d45n"
	E1209 22:52:55.784854       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4pzwv\": pod kindnet-4pzwv is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4pzwv" node="ha-920193-m04"
	E1209 22:52:55.785146       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4pzwv\": pod kindnet-4pzwv is already assigned to node \"ha-920193-m04\"" pod="kube-system/kindnet-4pzwv"
	
	
	==> kubelet <==
	Dec 09 22:54:44 ha-920193 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 22:54:44 ha-920193 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 22:54:44 ha-920193 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 22:54:44 ha-920193 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 22:54:44 ha-920193 kubelet[1302]: E1209 22:54:44.581397    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784884581065881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:54:44 ha-920193 kubelet[1302]: E1209 22:54:44.581439    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784884581065881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:54:54 ha-920193 kubelet[1302]: E1209 22:54:54.583096    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784894582573620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:54:54 ha-920193 kubelet[1302]: E1209 22:54:54.583476    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784894582573620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:04 ha-920193 kubelet[1302]: E1209 22:55:04.587043    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784904586404563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:04 ha-920193 kubelet[1302]: E1209 22:55:04.587520    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784904586404563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:14 ha-920193 kubelet[1302]: E1209 22:55:14.590203    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784914589554601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:14 ha-920193 kubelet[1302]: E1209 22:55:14.590522    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784914589554601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:24 ha-920193 kubelet[1302]: E1209 22:55:24.593898    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784924593226467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:24 ha-920193 kubelet[1302]: E1209 22:55:24.593942    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784924593226467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:34 ha-920193 kubelet[1302]: E1209 22:55:34.596079    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784934595337713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:34 ha-920193 kubelet[1302]: E1209 22:55:34.596564    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784934595337713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:44 ha-920193 kubelet[1302]: E1209 22:55:44.520346    1302 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 22:55:44 ha-920193 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 22:55:44 ha-920193 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 22:55:44 ha-920193 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 22:55:44 ha-920193 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 22:55:44 ha-920193 kubelet[1302]: E1209 22:55:44.598917    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784944598396332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:44 ha-920193 kubelet[1302]: E1209 22:55:44.598999    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784944598396332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:54 ha-920193 kubelet[1302]: E1209 22:55:54.601949    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784954601550343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:54 ha-920193 kubelet[1302]: E1209 22:55:54.602225    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784954601550343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-920193 -n ha-920193
helpers_test.go:261: (dbg) Run:  kubectl --context ha-920193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.63s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (6.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr: (4.14107693s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-920193 -n ha-920193
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-920193 logs -n 25: (1.306136259s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193:/home/docker/cp-test_ha-920193-m03_ha-920193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193 sudo cat                                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m02:/home/docker/cp-test_ha-920193-m03_ha-920193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m02 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04:/home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m04 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp testdata/cp-test.txt                                                | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3609340860/001/cp-test_ha-920193-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193:/home/docker/cp-test_ha-920193-m04_ha-920193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193 sudo cat                                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m02:/home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m02 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03:/home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m03 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-920193 node stop m02 -v=7                                                     | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-920193 node start m02 -v=7                                                    | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 22:49:03
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 22:49:03.145250   36778 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:49:03.145390   36778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:49:03.145399   36778 out.go:358] Setting ErrFile to fd 2...
	I1209 22:49:03.145404   36778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:49:03.145610   36778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 22:49:03.146205   36778 out.go:352] Setting JSON to false
	I1209 22:49:03.147113   36778 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5494,"bootTime":1733779049,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 22:49:03.147209   36778 start.go:139] virtualization: kvm guest
	I1209 22:49:03.149227   36778 out.go:177] * [ha-920193] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 22:49:03.150446   36778 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 22:49:03.150468   36778 notify.go:220] Checking for updates...
	I1209 22:49:03.152730   36778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:49:03.153842   36778 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:49:03.154957   36778 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:03.156087   36778 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 22:49:03.157179   36778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 22:49:03.158417   36778 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:49:03.193867   36778 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 22:49:03.195030   36778 start.go:297] selected driver: kvm2
	I1209 22:49:03.195046   36778 start.go:901] validating driver "kvm2" against <nil>
	I1209 22:49:03.195060   36778 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 22:49:03.196334   36778 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:49:03.196484   36778 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 22:49:03.213595   36778 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 22:49:03.213648   36778 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 22:49:03.213994   36778 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:49:03.214030   36778 cni.go:84] Creating CNI manager for ""
	I1209 22:49:03.214072   36778 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1209 22:49:03.214085   36778 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 22:49:03.214141   36778 start.go:340] cluster config:
	{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:49:03.214261   36778 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:49:03.215829   36778 out.go:177] * Starting "ha-920193" primary control-plane node in "ha-920193" cluster
	I1209 22:49:03.216947   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:49:03.216988   36778 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 22:49:03.217002   36778 cache.go:56] Caching tarball of preloaded images
	I1209 22:49:03.217077   36778 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:49:03.217091   36778 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:49:03.217507   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:03.217534   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json: {Name:mk69f8481a2f9361b3b46196caa6653a8d77a9fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:03.217729   36778 start.go:360] acquireMachinesLock for ha-920193: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:49:03.217779   36778 start.go:364] duration metric: took 30.111µs to acquireMachinesLock for "ha-920193"
	I1209 22:49:03.217805   36778 start.go:93] Provisioning new machine with config: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:49:03.217887   36778 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 22:49:03.219504   36778 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 22:49:03.219675   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:03.219709   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:03.234776   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
	I1209 22:49:03.235235   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:03.235843   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:03.235867   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:03.236261   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:03.236466   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:03.236632   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:03.236794   36778 start.go:159] libmachine.API.Create for "ha-920193" (driver="kvm2")
	I1209 22:49:03.236821   36778 client.go:168] LocalClient.Create starting
	I1209 22:49:03.236862   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:49:03.236900   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:03.236922   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:03.237001   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:49:03.237033   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:03.237054   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:03.237078   36778 main.go:141] libmachine: Running pre-create checks...
	I1209 22:49:03.237090   36778 main.go:141] libmachine: (ha-920193) Calling .PreCreateCheck
	I1209 22:49:03.237426   36778 main.go:141] libmachine: (ha-920193) Calling .GetConfigRaw
	I1209 22:49:03.237793   36778 main.go:141] libmachine: Creating machine...
	I1209 22:49:03.237806   36778 main.go:141] libmachine: (ha-920193) Calling .Create
	I1209 22:49:03.237934   36778 main.go:141] libmachine: (ha-920193) Creating KVM machine...
	I1209 22:49:03.239483   36778 main.go:141] libmachine: (ha-920193) DBG | found existing default KVM network
	I1209 22:49:03.240340   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.240142   36801 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1209 22:49:03.240365   36778 main.go:141] libmachine: (ha-920193) DBG | created network xml: 
	I1209 22:49:03.240393   36778 main.go:141] libmachine: (ha-920193) DBG | <network>
	I1209 22:49:03.240407   36778 main.go:141] libmachine: (ha-920193) DBG |   <name>mk-ha-920193</name>
	I1209 22:49:03.240417   36778 main.go:141] libmachine: (ha-920193) DBG |   <dns enable='no'/>
	I1209 22:49:03.240427   36778 main.go:141] libmachine: (ha-920193) DBG |   
	I1209 22:49:03.240438   36778 main.go:141] libmachine: (ha-920193) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 22:49:03.240454   36778 main.go:141] libmachine: (ha-920193) DBG |     <dhcp>
	I1209 22:49:03.240491   36778 main.go:141] libmachine: (ha-920193) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 22:49:03.240508   36778 main.go:141] libmachine: (ha-920193) DBG |     </dhcp>
	I1209 22:49:03.240522   36778 main.go:141] libmachine: (ha-920193) DBG |   </ip>
	I1209 22:49:03.240532   36778 main.go:141] libmachine: (ha-920193) DBG |   
	I1209 22:49:03.240542   36778 main.go:141] libmachine: (ha-920193) DBG | </network>
	I1209 22:49:03.240557   36778 main.go:141] libmachine: (ha-920193) DBG | 
	I1209 22:49:03.245903   36778 main.go:141] libmachine: (ha-920193) DBG | trying to create private KVM network mk-ha-920193 192.168.39.0/24...
	I1209 22:49:03.312870   36778 main.go:141] libmachine: (ha-920193) DBG | private KVM network mk-ha-920193 192.168.39.0/24 created
	I1209 22:49:03.312901   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.312803   36801 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:03.312925   36778 main.go:141] libmachine: (ha-920193) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193 ...
	I1209 22:49:03.312938   36778 main.go:141] libmachine: (ha-920193) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:49:03.312960   36778 main.go:141] libmachine: (ha-920193) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:49:03.559720   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.559511   36801 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa...
	I1209 22:49:03.632777   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.632628   36801 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/ha-920193.rawdisk...
	I1209 22:49:03.632808   36778 main.go:141] libmachine: (ha-920193) DBG | Writing magic tar header
	I1209 22:49:03.632868   36778 main.go:141] libmachine: (ha-920193) DBG | Writing SSH key tar header
	I1209 22:49:03.632897   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.632735   36801 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193 ...
	I1209 22:49:03.632914   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193 (perms=drwx------)
	I1209 22:49:03.632931   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:49:03.632938   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:49:03.632951   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:49:03.632959   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:49:03.632968   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:49:03.632988   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193
	I1209 22:49:03.632996   36778 main.go:141] libmachine: (ha-920193) Creating domain...
	I1209 22:49:03.633013   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:49:03.633026   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:03.633034   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:49:03.633039   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:49:03.633046   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:49:03.633051   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home
	I1209 22:49:03.633058   36778 main.go:141] libmachine: (ha-920193) DBG | Skipping /home - not owner
	I1209 22:49:03.634033   36778 main.go:141] libmachine: (ha-920193) define libvirt domain using xml: 
	I1209 22:49:03.634053   36778 main.go:141] libmachine: (ha-920193) <domain type='kvm'>
	I1209 22:49:03.634063   36778 main.go:141] libmachine: (ha-920193)   <name>ha-920193</name>
	I1209 22:49:03.634077   36778 main.go:141] libmachine: (ha-920193)   <memory unit='MiB'>2200</memory>
	I1209 22:49:03.634087   36778 main.go:141] libmachine: (ha-920193)   <vcpu>2</vcpu>
	I1209 22:49:03.634099   36778 main.go:141] libmachine: (ha-920193)   <features>
	I1209 22:49:03.634108   36778 main.go:141] libmachine: (ha-920193)     <acpi/>
	I1209 22:49:03.634117   36778 main.go:141] libmachine: (ha-920193)     <apic/>
	I1209 22:49:03.634126   36778 main.go:141] libmachine: (ha-920193)     <pae/>
	I1209 22:49:03.634143   36778 main.go:141] libmachine: (ha-920193)     
	I1209 22:49:03.634155   36778 main.go:141] libmachine: (ha-920193)   </features>
	I1209 22:49:03.634163   36778 main.go:141] libmachine: (ha-920193)   <cpu mode='host-passthrough'>
	I1209 22:49:03.634172   36778 main.go:141] libmachine: (ha-920193)   
	I1209 22:49:03.634184   36778 main.go:141] libmachine: (ha-920193)   </cpu>
	I1209 22:49:03.634192   36778 main.go:141] libmachine: (ha-920193)   <os>
	I1209 22:49:03.634200   36778 main.go:141] libmachine: (ha-920193)     <type>hvm</type>
	I1209 22:49:03.634209   36778 main.go:141] libmachine: (ha-920193)     <boot dev='cdrom'/>
	I1209 22:49:03.634217   36778 main.go:141] libmachine: (ha-920193)     <boot dev='hd'/>
	I1209 22:49:03.634226   36778 main.go:141] libmachine: (ha-920193)     <bootmenu enable='no'/>
	I1209 22:49:03.634233   36778 main.go:141] libmachine: (ha-920193)   </os>
	I1209 22:49:03.634241   36778 main.go:141] libmachine: (ha-920193)   <devices>
	I1209 22:49:03.634250   36778 main.go:141] libmachine: (ha-920193)     <disk type='file' device='cdrom'>
	I1209 22:49:03.634279   36778 main.go:141] libmachine: (ha-920193)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/boot2docker.iso'/>
	I1209 22:49:03.634301   36778 main.go:141] libmachine: (ha-920193)       <target dev='hdc' bus='scsi'/>
	I1209 22:49:03.634316   36778 main.go:141] libmachine: (ha-920193)       <readonly/>
	I1209 22:49:03.634323   36778 main.go:141] libmachine: (ha-920193)     </disk>
	I1209 22:49:03.634332   36778 main.go:141] libmachine: (ha-920193)     <disk type='file' device='disk'>
	I1209 22:49:03.634344   36778 main.go:141] libmachine: (ha-920193)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:49:03.634359   36778 main.go:141] libmachine: (ha-920193)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/ha-920193.rawdisk'/>
	I1209 22:49:03.634367   36778 main.go:141] libmachine: (ha-920193)       <target dev='hda' bus='virtio'/>
	I1209 22:49:03.634375   36778 main.go:141] libmachine: (ha-920193)     </disk>
	I1209 22:49:03.634383   36778 main.go:141] libmachine: (ha-920193)     <interface type='network'>
	I1209 22:49:03.634391   36778 main.go:141] libmachine: (ha-920193)       <source network='mk-ha-920193'/>
	I1209 22:49:03.634409   36778 main.go:141] libmachine: (ha-920193)       <model type='virtio'/>
	I1209 22:49:03.634421   36778 main.go:141] libmachine: (ha-920193)     </interface>
	I1209 22:49:03.634431   36778 main.go:141] libmachine: (ha-920193)     <interface type='network'>
	I1209 22:49:03.634442   36778 main.go:141] libmachine: (ha-920193)       <source network='default'/>
	I1209 22:49:03.634452   36778 main.go:141] libmachine: (ha-920193)       <model type='virtio'/>
	I1209 22:49:03.634463   36778 main.go:141] libmachine: (ha-920193)     </interface>
	I1209 22:49:03.634473   36778 main.go:141] libmachine: (ha-920193)     <serial type='pty'>
	I1209 22:49:03.634484   36778 main.go:141] libmachine: (ha-920193)       <target port='0'/>
	I1209 22:49:03.634498   36778 main.go:141] libmachine: (ha-920193)     </serial>
	I1209 22:49:03.634535   36778 main.go:141] libmachine: (ha-920193)     <console type='pty'>
	I1209 22:49:03.634561   36778 main.go:141] libmachine: (ha-920193)       <target type='serial' port='0'/>
	I1209 22:49:03.634581   36778 main.go:141] libmachine: (ha-920193)     </console>
	I1209 22:49:03.634592   36778 main.go:141] libmachine: (ha-920193)     <rng model='virtio'>
	I1209 22:49:03.634601   36778 main.go:141] libmachine: (ha-920193)       <backend model='random'>/dev/random</backend>
	I1209 22:49:03.634611   36778 main.go:141] libmachine: (ha-920193)     </rng>
	I1209 22:49:03.634621   36778 main.go:141] libmachine: (ha-920193)     
	I1209 22:49:03.634629   36778 main.go:141] libmachine: (ha-920193)     
	I1209 22:49:03.634634   36778 main.go:141] libmachine: (ha-920193)   </devices>
	I1209 22:49:03.634641   36778 main.go:141] libmachine: (ha-920193) </domain>
	I1209 22:49:03.634660   36778 main.go:141] libmachine: (ha-920193) 
	I1209 22:49:03.638977   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:88:5b:26 in network default
	I1209 22:49:03.639478   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:03.639517   36778 main.go:141] libmachine: (ha-920193) Ensuring networks are active...
	I1209 22:49:03.640151   36778 main.go:141] libmachine: (ha-920193) Ensuring network default is active
	I1209 22:49:03.640468   36778 main.go:141] libmachine: (ha-920193) Ensuring network mk-ha-920193 is active
	I1209 22:49:03.640970   36778 main.go:141] libmachine: (ha-920193) Getting domain xml...
	I1209 22:49:03.641682   36778 main.go:141] libmachine: (ha-920193) Creating domain...
	I1209 22:49:04.829698   36778 main.go:141] libmachine: (ha-920193) Waiting to get IP...
	I1209 22:49:04.830434   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:04.830835   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:04.830867   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:04.830824   36801 retry.go:31] will retry after 207.081791ms: waiting for machine to come up
	I1209 22:49:05.039144   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:05.039519   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:05.039585   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:05.039471   36801 retry.go:31] will retry after 281.967291ms: waiting for machine to come up
	I1209 22:49:05.322964   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:05.323366   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:05.323382   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:05.323322   36801 retry.go:31] will retry after 481.505756ms: waiting for machine to come up
	I1209 22:49:05.805961   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:05.806356   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:05.806376   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:05.806314   36801 retry.go:31] will retry after 549.592497ms: waiting for machine to come up
	I1209 22:49:06.357773   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:06.358284   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:06.358319   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:06.358243   36801 retry.go:31] will retry after 535.906392ms: waiting for machine to come up
	I1209 22:49:06.896232   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:06.896608   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:06.896631   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:06.896560   36801 retry.go:31] will retry after 874.489459ms: waiting for machine to come up
	I1209 22:49:07.772350   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:07.772754   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:07.772787   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:07.772706   36801 retry.go:31] will retry after 1.162571844s: waiting for machine to come up
	I1209 22:49:08.936520   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:08.936889   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:08.936917   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:08.936873   36801 retry.go:31] will retry after 1.45755084s: waiting for machine to come up
	I1209 22:49:10.396453   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:10.396871   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:10.396892   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:10.396843   36801 retry.go:31] will retry after 1.609479332s: waiting for machine to come up
	I1209 22:49:12.008693   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:12.009140   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:12.009166   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:12.009087   36801 retry.go:31] will retry after 2.268363531s: waiting for machine to come up
	I1209 22:49:14.279389   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:14.279856   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:14.279912   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:14.279851   36801 retry.go:31] will retry after 2.675009942s: waiting for machine to come up
	I1209 22:49:16.957696   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:16.958066   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:16.958096   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:16.958013   36801 retry.go:31] will retry after 2.665510056s: waiting for machine to come up
	I1209 22:49:19.624784   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:19.625187   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:19.625202   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:19.625166   36801 retry.go:31] will retry after 2.857667417s: waiting for machine to come up
	I1209 22:49:22.486137   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:22.486540   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:22.486563   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:22.486493   36801 retry.go:31] will retry after 4.026256687s: waiting for machine to come up
	I1209 22:49:26.516409   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.516832   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has current primary IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.516858   36778 main.go:141] libmachine: (ha-920193) Found IP for machine: 192.168.39.102
	I1209 22:49:26.516892   36778 main.go:141] libmachine: (ha-920193) Reserving static IP address...
	I1209 22:49:26.517220   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find host DHCP lease matching {name: "ha-920193", mac: "52:54:00:eb:3c:cb", ip: "192.168.39.102"} in network mk-ha-920193
	I1209 22:49:26.587512   36778 main.go:141] libmachine: (ha-920193) DBG | Getting to WaitForSSH function...
	I1209 22:49:26.587538   36778 main.go:141] libmachine: (ha-920193) Reserved static IP address: 192.168.39.102
	I1209 22:49:26.587551   36778 main.go:141] libmachine: (ha-920193) Waiting for SSH to be available...
	I1209 22:49:26.589724   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.590056   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.590080   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.590252   36778 main.go:141] libmachine: (ha-920193) DBG | Using SSH client type: external
	I1209 22:49:26.590281   36778 main.go:141] libmachine: (ha-920193) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa (-rw-------)
	I1209 22:49:26.590312   36778 main.go:141] libmachine: (ha-920193) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:49:26.590335   36778 main.go:141] libmachine: (ha-920193) DBG | About to run SSH command:
	I1209 22:49:26.590368   36778 main.go:141] libmachine: (ha-920193) DBG | exit 0
	I1209 22:49:26.707404   36778 main.go:141] libmachine: (ha-920193) DBG | SSH cmd err, output: <nil>: 
	I1209 22:49:26.707687   36778 main.go:141] libmachine: (ha-920193) KVM machine creation complete!
	I1209 22:49:26.708024   36778 main.go:141] libmachine: (ha-920193) Calling .GetConfigRaw
	I1209 22:49:26.708523   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:26.708739   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:26.708918   36778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:49:26.708931   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:26.710113   36778 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:49:26.710125   36778 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:49:26.710130   36778 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:49:26.710135   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:26.712426   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.712765   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.712791   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.712925   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:26.713081   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.713185   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.713306   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:26.713452   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:26.713680   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:26.713692   36778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:49:26.806695   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:49:26.806717   36778 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:49:26.806725   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:26.809366   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.809767   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.809800   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.809958   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:26.810141   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.810311   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.810444   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:26.810627   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:26.810776   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:26.810787   36778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:49:26.908040   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:49:26.908090   36778 main.go:141] libmachine: found compatible host: buildroot
	I1209 22:49:26.908097   36778 main.go:141] libmachine: Provisioning with buildroot...
	I1209 22:49:26.908104   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:26.908364   36778 buildroot.go:166] provisioning hostname "ha-920193"
	I1209 22:49:26.908392   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:26.908590   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:26.911118   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.911513   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.911538   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.911715   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:26.911868   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.911989   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.912100   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:26.912224   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:26.912420   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:26.912438   36778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193 && echo "ha-920193" | sudo tee /etc/hostname
	I1209 22:49:27.020773   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193
	
	I1209 22:49:27.020799   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.023575   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.023846   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.023871   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.024029   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.024220   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.024374   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.024530   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.024691   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:27.024872   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:27.024888   36778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:49:27.127613   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:49:27.127642   36778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:49:27.127660   36778 buildroot.go:174] setting up certificates
	I1209 22:49:27.127691   36778 provision.go:84] configureAuth start
	I1209 22:49:27.127710   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:27.127961   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:27.130248   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.130591   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.130619   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.130738   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.132923   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.133247   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.133271   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.133422   36778 provision.go:143] copyHostCerts
	I1209 22:49:27.133461   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:49:27.133491   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:49:27.133506   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:49:27.133573   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:49:27.133653   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:49:27.133670   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:49:27.133677   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:49:27.133702   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:49:27.133745   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:49:27.133761   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:49:27.133767   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:49:27.133788   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:49:27.133835   36778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193 san=[127.0.0.1 192.168.39.102 ha-920193 localhost minikube]
	I1209 22:49:27.297434   36778 provision.go:177] copyRemoteCerts
	I1209 22:49:27.297494   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:49:27.297515   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.300069   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.300424   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.300443   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.300615   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.300792   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.300928   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.301029   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.378773   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:49:27.378830   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1209 22:49:27.403553   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:49:27.403627   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 22:49:27.425459   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:49:27.425526   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:49:27.449197   36778 provision.go:87] duration metric: took 321.487984ms to configureAuth
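The configureAuth step above generates a server certificate whose SANs cover 127.0.0.1, 192.168.39.102, ha-920193, localhost and minikube, then copies it to /etc/docker/server.pem on the guest. Below is a minimal Go sketch for inspecting those SANs on the copied file; the path and the single-PEM-block assumption are illustrative, not taken from minikube's code.

// Sketch: print the SANs of a PEM-encoded server certificate.
// Assumes a single CERTIFICATE block at the (hypothetical) path below.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/docker/server.pem") // path as used in the log above
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no certificate PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect ha-920193, localhost, minikube
	fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.168.39.102
}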
	I1209 22:49:27.449229   36778 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:49:27.449449   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:49:27.449534   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.453191   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.453559   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.453595   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.453759   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.453939   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.454070   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.454184   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.454317   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:27.454513   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:27.454534   36778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:49:27.653703   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:49:27.653733   36778 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:49:27.653756   36778 main.go:141] libmachine: (ha-920193) Calling .GetURL
	I1209 22:49:27.655032   36778 main.go:141] libmachine: (ha-920193) DBG | Using libvirt version 6000000
	I1209 22:49:27.657160   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.657463   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.657491   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.657682   36778 main.go:141] libmachine: Docker is up and running!
	I1209 22:49:27.657699   36778 main.go:141] libmachine: Reticulating splines...
	I1209 22:49:27.657708   36778 client.go:171] duration metric: took 24.420875377s to LocalClient.Create
	I1209 22:49:27.657735   36778 start.go:167] duration metric: took 24.420942176s to libmachine.API.Create "ha-920193"
	I1209 22:49:27.657747   36778 start.go:293] postStartSetup for "ha-920193" (driver="kvm2")
	I1209 22:49:27.657761   36778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:49:27.657785   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.657983   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:49:27.658006   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.659917   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.660172   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.660200   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.660370   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.660519   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.660646   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.660782   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.737935   36778 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:49:27.741969   36778 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:49:27.741998   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:49:27.742081   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:49:27.742178   36778 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:49:27.742190   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:49:27.742316   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:49:27.752769   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:49:27.776187   36778 start.go:296] duration metric: took 118.424893ms for postStartSetup
	I1209 22:49:27.776233   36778 main.go:141] libmachine: (ha-920193) Calling .GetConfigRaw
	I1209 22:49:27.776813   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:27.779433   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.779777   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.779809   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.780018   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:27.780196   36778 start.go:128] duration metric: took 24.562298059s to createHost
	I1209 22:49:27.780219   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.782389   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.782713   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.782737   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.782928   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.783093   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.783255   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.783378   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.783531   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:27.783762   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:27.783780   36778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:49:27.880035   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733784567.857266275
	
	I1209 22:49:27.880058   36778 fix.go:216] guest clock: 1733784567.857266275
	I1209 22:49:27.880065   36778 fix.go:229] Guest: 2024-12-09 22:49:27.857266275 +0000 UTC Remote: 2024-12-09 22:49:27.780207864 +0000 UTC m=+24.672894470 (delta=77.058411ms)
	I1209 22:49:27.880084   36778 fix.go:200] guest clock delta is within tolerance: 77.058411ms
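The fix.go lines above compare the guest clock, read back via "date +%s.%N", against the host clock and accept the run when the delta stays inside a small tolerance. A rough Go sketch of that comparison, using the timestamp from this run as sample input; the one-second tolerance is an assumption, not the value minikube uses.

// Sketch: parse a guest `date +%s.%N` reading and compare it with the local clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1733784567.857266275" // sample value copied from the log above
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)  // error handling elided: input is a fixed sample
	nsec, _ := strconv.ParseInt(parts[1], 10, 64) // %N yields nanoseconds (9 digits)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	tolerance := 1 * time.Second // assumed tolerance for illustration only
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}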
	I1209 22:49:27.880088   36778 start.go:83] releasing machines lock for "ha-920193", held for 24.662297943s
	I1209 22:49:27.880110   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.880381   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:27.883090   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.883418   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.883452   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.883630   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.884081   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.884211   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.884272   36778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:49:27.884329   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.884381   36778 ssh_runner.go:195] Run: cat /version.json
	I1209 22:49:27.884403   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.886622   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.886872   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.886899   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.886994   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.887039   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.887207   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.887321   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.887333   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.887353   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.887479   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.887529   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.887692   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.887829   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.887976   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.963462   36778 ssh_runner.go:195] Run: systemctl --version
	I1209 22:49:27.986028   36778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:49:28.143161   36778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:49:28.149221   36778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:49:28.149289   36778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:49:28.165410   36778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 22:49:28.165442   36778 start.go:495] detecting cgroup driver to use...
	I1209 22:49:28.165509   36778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:49:28.181384   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:49:28.195011   36778 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:49:28.195063   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:49:28.208554   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:49:28.222230   36778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:49:28.338093   36778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:49:28.483809   36778 docker.go:233] disabling docker service ...
	I1209 22:49:28.483868   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:49:28.497723   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:49:28.510133   36778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:49:28.637703   36778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:49:28.768621   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:49:28.781961   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:49:28.799140   36778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:49:28.799205   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.808634   36778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:49:28.808697   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.818355   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.827780   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.837191   36778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:49:28.846758   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.856291   36778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.872403   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
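The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager to cgroupfs. Here is a Go sketch of the same two edits done with regexp instead of sed; editing the file in place without a backup, and needing root for /etc, are simplifications for illustration.

// Sketch: Go equivalent of the two sed edits above on the CRI-O drop-in config.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path as used in the log
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Replace any existing pause_image line with the pinned registry.k8s.io image.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Force the cgroupfs cgroup manager, matching the second sed above.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
}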
	I1209 22:49:28.881716   36778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:49:28.890298   36778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:49:28.890355   36778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:49:28.902738   36778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
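When the net.bridge.bridge-nf-call-iptables sysctl is missing, the runner falls back to loading br_netfilter and then enables IP forwarding, as the commands above show. A minimal sketch of that probe-then-modprobe fallback, assuming the standard /proc path and root privileges.

// Sketch: check the bridge netfilter sysctl and fall back to loading br_netfilter.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(sysctlPath); err == nil {
		fmt.Println("bridge netfilter already available")
		return
	}
	// Sysctl not present yet: load the module, mirroring `sudo modprobe br_netfilter` above.
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		fmt.Printf("modprobe failed: %v: %s\n", err, out)
		return
	}
	fmt.Println("br_netfilter loaded")
}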
	I1209 22:49:28.911729   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:49:29.013922   36778 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 22:49:29.106638   36778 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:49:29.106719   36778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:49:29.111193   36778 start.go:563] Will wait 60s for crictl version
	I1209 22:49:29.111261   36778 ssh_runner.go:195] Run: which crictl
	I1209 22:49:29.115298   36778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:49:29.151109   36778 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:49:29.151178   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:49:29.178245   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:49:29.206246   36778 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:49:29.207478   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:29.209787   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:29.210134   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:29.210160   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:29.210332   36778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:49:29.214243   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:49:29.226620   36778 kubeadm.go:883] updating cluster {Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 22:49:29.226723   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:49:29.226766   36778 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:49:29.257928   36778 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 22:49:29.257999   36778 ssh_runner.go:195] Run: which lz4
	I1209 22:49:29.261848   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1209 22:49:29.261955   36778 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 22:49:29.265782   36778 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 22:49:29.265814   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 22:49:30.441006   36778 crio.go:462] duration metric: took 1.179084887s to copy over tarball
	I1209 22:49:30.441074   36778 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 22:49:32.468580   36778 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.027482243s)
	I1209 22:49:32.468624   36778 crio.go:469] duration metric: took 2.027585779s to extract the tarball
	I1209 22:49:32.468641   36778 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 22:49:32.505123   36778 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:49:32.547324   36778 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 22:49:32.547346   36778 cache_images.go:84] Images are preloaded, skipping loading
	I1209 22:49:32.547353   36778 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.2 crio true true} ...
	I1209 22:49:32.547438   36778 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
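The kubelet drop-in printed above pins --hostname-override and --node-ip to the node's values. A small text/template sketch that renders an equivalent drop-in from those values; the template text here is an illustration, not minikube's actual template.

// Sketch: render a kubelet systemd drop-in from node-specific values.
package main

import (
	"log"
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.2", "ha-920193", "192.168.39.102"} // values from this run's log
	// Print the rendered unit; the real run copies an equivalent file to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines further down).
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}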
	I1209 22:49:32.547498   36778 ssh_runner.go:195] Run: crio config
	I1209 22:49:32.589945   36778 cni.go:84] Creating CNI manager for ""
	I1209 22:49:32.589970   36778 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 22:49:32.589982   36778 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 22:49:32.590011   36778 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-920193 NodeName:ha-920193 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 22:49:32.590137   36778 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-920193"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.102"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
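The generated kubeadm config above uses podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12. A quick Go sketch that parses both CIDRs and confirms they do not overlap; this check illustrates why the two ranges can coexist and is not a step minikube performs in this log.

// Sketch: parse the pod and service CIDRs from the kubeadm config and check for overlap.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, podNet, err := net.ParseCIDR("10.244.0.0/16")
	if err != nil {
		panic(err)
	}
	_, svcNet, err := net.ParseCIDR("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	// Two prefix ranges overlap iff either network address falls inside the other range.
	overlap := podNet.Contains(svcNet.IP) || svcNet.Contains(podNet.IP)
	fmt.Printf("podSubnet=%s serviceSubnet=%s overlap=%v\n", podNet, svcNet, overlap)
}

Running it prints overlap=false, which is the property kube-proxy and the CNI rely on when routing pod versus service traffic.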
	
	I1209 22:49:32.590159   36778 kube-vip.go:115] generating kube-vip config ...
	I1209 22:49:32.590202   36778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:49:32.605724   36778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:49:32.605829   36778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
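The kube-vip static-pod manifest above carries the control-plane VIP (192.168.39.254) as the "address" environment variable. A sketch that pulls that value back out of such a manifest, assuming gopkg.in/yaml.v3 is available; the trimmed inline manifest mirrors only the fields needed here.

// Sketch: read the control-plane VIP out of a kube-vip style static-pod manifest.
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

type manifest struct {
	Spec struct {
		Containers []struct {
			Env []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data := []byte(`
spec:
  containers:
  - env:
    - name: address
      value: 192.168.39.254
`)
	var m manifest
	if err := yaml.Unmarshal(data, &m); err != nil {
		log.Fatal(err)
	}
	for _, c := range m.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == "address" {
				fmt.Println("control-plane VIP:", e.Value) // 192.168.39.254 in this run
			}
		}
	}
}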
	I1209 22:49:32.605883   36778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:49:32.615285   36778 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 22:49:32.615345   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1209 22:49:32.624299   36778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1209 22:49:32.639876   36778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:49:32.656137   36778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1209 22:49:32.672494   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1209 22:49:32.688039   36778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:49:32.691843   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:49:32.703440   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:49:32.825661   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:49:32.842362   36778 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.102
	I1209 22:49:32.842387   36778 certs.go:194] generating shared ca certs ...
	I1209 22:49:32.842404   36778 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:32.842561   36778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:49:32.842601   36778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:49:32.842611   36778 certs.go:256] generating profile certs ...
	I1209 22:49:32.842674   36778 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:49:32.842693   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt with IP's: []
	I1209 22:49:32.980963   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt ...
	I1209 22:49:32.980992   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt: {Name:mkd9ec798303363f6538acfc05f1a5f04066e731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:32.981176   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key ...
	I1209 22:49:32.981188   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key: {Name:mk056f923a34783de09213845e376bddce6f3df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:32.981268   36778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19
	I1209 22:49:32.981285   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I1209 22:49:33.242216   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19 ...
	I1209 22:49:33.242250   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19: {Name:mk7179026523f0b057d26b52e40a5885ad95d8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.242434   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19 ...
	I1209 22:49:33.242448   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19: {Name:mk65609d82220269362f492c0a2d0cc4da8d1447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.242525   36778 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19 -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:49:33.242596   36778 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19 -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
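The apiserver certificate above is signed for 10.96.0.1 among its SANs, i.e. the first host address of the 10.96.0.0/12 service CIDR, which becomes the ClusterIP of the kubernetes service. A sketch of that derivation; the simple last-octet increment assumes the network address does not overflow, which holds for this CIDR.

// Sketch: derive the kubernetes service ClusterIP (first host address) from the service CIDR.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, svcNet, err := net.ParseCIDR("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	ip := svcNet.IP.To4()
	// Network address + 1; no carry handling needed since the last octet is 0 here.
	first := net.IPv4(ip[0], ip[1], ip[2], ip[3]+1)
	fmt.Println(first) // 10.96.0.1, matching the SAN list above
}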
	I1209 22:49:33.242650   36778 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:49:33.242665   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt with IP's: []
	I1209 22:49:33.389277   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt ...
	I1209 22:49:33.389307   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt: {Name:mk8b70654b36de7093b054b1d0d39a95b39d45fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.389473   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key ...
	I1209 22:49:33.389485   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key: {Name:mk4ec3e3be54da03f1d1683c75f10f14c0904ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.389559   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:49:33.389576   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:49:33.389587   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:49:33.389600   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:49:33.389610   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:49:33.389620   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:49:33.389632   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:49:33.389642   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:49:33.389693   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:49:33.389729   36778 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:49:33.389739   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:49:33.389758   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:49:33.389781   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:49:33.389801   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:49:33.389837   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:49:33.389863   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.389878   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.389890   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.390445   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:49:33.414470   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:49:33.436920   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:49:33.458977   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:49:33.481846   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 22:49:33.503907   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 22:49:33.525852   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:49:33.548215   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:49:33.569802   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:49:33.602465   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:49:33.628007   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:49:33.653061   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 22:49:33.668632   36778 ssh_runner.go:195] Run: openssl version
	I1209 22:49:33.674257   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:49:33.684380   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.688650   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.688714   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.694036   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:49:33.704144   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:49:33.714060   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.718184   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.718227   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.723730   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 22:49:33.734203   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:49:33.744729   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.749033   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.749080   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.754563   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 22:49:33.764859   36778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:49:33.768876   36778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:49:33.768937   36778 kubeadm.go:392] StartCluster: {Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:49:33.769036   36778 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 22:49:33.769105   36778 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 22:49:33.804100   36778 cri.go:89] found id: ""
	I1209 22:49:33.804165   36778 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 22:49:33.814344   36778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 22:49:33.824218   36778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 22:49:33.834084   36778 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 22:49:33.834106   36778 kubeadm.go:157] found existing configuration files:
	
	I1209 22:49:33.834157   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 22:49:33.843339   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 22:49:33.843379   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 22:49:33.853049   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 22:49:33.862222   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 22:49:33.862280   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 22:49:33.872041   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 22:49:33.881416   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 22:49:33.881475   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 22:49:33.891237   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 22:49:33.900609   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 22:49:33.900659   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 22:49:33.910089   36778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 22:49:34.000063   36778 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 22:49:34.000183   36778 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 22:49:34.091544   36778 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 22:49:34.091739   36778 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 22:49:34.091892   36778 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 22:49:34.100090   36778 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 22:49:34.102871   36778 out.go:235]   - Generating certificates and keys ...
	I1209 22:49:34.103528   36778 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 22:49:34.103648   36778 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 22:49:34.284340   36778 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 22:49:34.462874   36778 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 22:49:34.647453   36778 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 22:49:34.787984   36778 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 22:49:35.020609   36778 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 22:49:35.020761   36778 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-920193 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1209 22:49:35.078800   36778 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 22:49:35.078977   36778 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-920193 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1209 22:49:35.150500   36778 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 22:49:35.230381   36778 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 22:49:35.499235   36778 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 22:49:35.499319   36778 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 22:49:35.912886   36778 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 22:49:36.241120   36778 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 22:49:36.405939   36778 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 22:49:36.604047   36778 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 22:49:36.814671   36778 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 22:49:36.815164   36778 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 22:49:36.818373   36778 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 22:49:36.820325   36778 out.go:235]   - Booting up control plane ...
	I1209 22:49:36.820430   36778 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 22:49:36.820522   36778 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 22:49:36.821468   36778 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 22:49:36.841330   36778 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 22:49:36.848308   36778 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 22:49:36.848421   36778 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 22:49:36.995410   36778 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 22:49:36.995535   36778 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 22:49:37.995683   36778 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001015441s
	I1209 22:49:37.995786   36778 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 22:49:43.754200   36778 kubeadm.go:310] [api-check] The API server is healthy after 5.761609039s
	I1209 22:49:43.767861   36778 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 22:49:43.785346   36778 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 22:49:43.810025   36778 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 22:49:43.810266   36778 kubeadm.go:310] [mark-control-plane] Marking the node ha-920193 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 22:49:43.821256   36778 kubeadm.go:310] [bootstrap-token] Using token: 72yxn0.qrsfcagkngfj4gxi
	I1209 22:49:43.822572   36778 out.go:235]   - Configuring RBAC rules ...
	I1209 22:49:43.822691   36778 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 22:49:43.832707   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 22:49:43.844059   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 22:49:43.846995   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 22:49:43.849841   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 22:49:43.856257   36778 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 22:49:44.160151   36778 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 22:49:44.591740   36778 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 22:49:45.161509   36778 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 22:49:45.162464   36778 kubeadm.go:310] 
	I1209 22:49:45.162543   36778 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 22:49:45.162552   36778 kubeadm.go:310] 
	I1209 22:49:45.162641   36778 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 22:49:45.162653   36778 kubeadm.go:310] 
	I1209 22:49:45.162689   36778 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 22:49:45.162763   36778 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 22:49:45.162845   36778 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 22:49:45.162856   36778 kubeadm.go:310] 
	I1209 22:49:45.162934   36778 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 22:49:45.162944   36778 kubeadm.go:310] 
	I1209 22:49:45.163005   36778 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 22:49:45.163016   36778 kubeadm.go:310] 
	I1209 22:49:45.163084   36778 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 22:49:45.163184   36778 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 22:49:45.163290   36778 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 22:49:45.163301   36778 kubeadm.go:310] 
	I1209 22:49:45.163412   36778 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 22:49:45.163482   36778 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 22:49:45.163488   36778 kubeadm.go:310] 
	I1209 22:49:45.163586   36778 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 72yxn0.qrsfcagkngfj4gxi \
	I1209 22:49:45.163727   36778 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1209 22:49:45.163762   36778 kubeadm.go:310] 	--control-plane 
	I1209 22:49:45.163771   36778 kubeadm.go:310] 
	I1209 22:49:45.163891   36778 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 22:49:45.163902   36778 kubeadm.go:310] 
	I1209 22:49:45.164042   36778 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 72yxn0.qrsfcagkngfj4gxi \
	I1209 22:49:45.164198   36778 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1209 22:49:45.164453   36778 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
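The two kubeadm join commands above reuse the bootstrap token and discovery hash minted during this run. One way to cross-check that hash against the CA actually on disk (a sketch, assuming the standard kubeadm recipe and minikube's certificateDir /var/lib/minikube/certs shown earlier in the init output) is:

    # run on the control-plane node; should print the 988b8db5... value from the join command
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | sha256sum | cut -d' ' -f1
    # the 72yxn0.* bootstrap token should be listed until it expires
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm token list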
	I1209 22:49:45.164487   36778 cni.go:84] Creating CNI manager for ""
	I1209 22:49:45.164497   36778 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 22:49:45.166869   36778 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1209 22:49:45.168578   36778 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 22:49:45.173867   36778 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1209 22:49:45.173890   36778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 22:49:45.193577   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
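With the CNI manifest applied (kindnet, per the "multinode detected" line above), a couple of generic follow-up checks could look like the sketch below; only the kubectl binary and kubeconfig paths are taken from the log, everything else is illustrative:

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonsets
    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide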
	I1209 22:49:45.540330   36778 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 22:49:45.540400   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:45.540429   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-920193 minikube.k8s.io/updated_at=2024_12_09T22_49_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=ha-920193 minikube.k8s.io/primary=true
	I1209 22:49:45.563713   36778 ops.go:34] apiserver oom_adj: -16
	I1209 22:49:45.755027   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:46.255384   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:46.755819   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:47.255436   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:47.755914   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:48.255404   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:48.755938   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:49.255745   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:49.346913   36778 kubeadm.go:1113] duration metric: took 3.806571287s to wait for elevateKubeSystemPrivileges
	I1209 22:49:49.346942   36778 kubeadm.go:394] duration metric: took 15.578011127s to StartCluster
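The block of repeated "get sa default" calls above is minikube's elevateKubeSystemPrivileges readiness poll: it simply retries until the default service account exists in the new cluster. A rough shell equivalent (minikube does this in Go; the 0.5s interval mirrors the timestamps in the log and is illustrative):

    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry roughly every 500ms, as seen in the log
    done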
	I1209 22:49:49.346958   36778 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:49.347032   36778 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:49:49.347686   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:49.347889   36778 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:49:49.347901   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 22:49:49.347912   36778 start.go:241] waiting for startup goroutines ...
	I1209 22:49:49.347916   36778 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 22:49:49.347997   36778 addons.go:69] Setting storage-provisioner=true in profile "ha-920193"
	I1209 22:49:49.348008   36778 addons.go:69] Setting default-storageclass=true in profile "ha-920193"
	I1209 22:49:49.348018   36778 addons.go:234] Setting addon storage-provisioner=true in "ha-920193"
	I1209 22:49:49.348025   36778 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-920193"
	I1209 22:49:49.348059   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:49:49.348092   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:49:49.348366   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.348401   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.348486   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.348504   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.364294   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41179
	I1209 22:49:49.364762   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I1209 22:49:49.364808   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.365192   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.365331   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.365359   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.365654   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.365671   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.365700   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.365855   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:49.366017   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.366436   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.366477   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.367841   36778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:49:49.368072   36778 kapi.go:59] client config for ha-920193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key", CAFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c9a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 22:49:49.368506   36778 cert_rotation.go:140] Starting client certificate rotation controller
	I1209 22:49:49.368728   36778 addons.go:234] Setting addon default-storageclass=true in "ha-920193"
	I1209 22:49:49.368759   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:49:49.368995   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.369045   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.381548   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44341
	I1209 22:49:49.382048   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.382623   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.382650   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.382946   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.383123   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:49.384085   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35753
	I1209 22:49:49.384563   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.385002   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:49.385074   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.385099   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.385406   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.385869   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.385898   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.387093   36778 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 22:49:49.388363   36778 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 22:49:49.388378   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 22:49:49.388396   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:49.391382   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.391959   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:49.391988   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.392168   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:49.392369   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:49.392529   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:49.392718   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:49.402583   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I1209 22:49:49.403101   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.403703   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.403733   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.404140   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.404327   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:49.406048   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:49.406246   36778 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 22:49:49.406264   36778 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 22:49:49.406283   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:49.409015   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.409417   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:49.409445   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.409566   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:49.409736   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:49.409906   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:49.410051   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:49.469421   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 22:49:49.523797   36778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 22:49:49.572493   36778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 22:49:49.935058   36778 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
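The long sed pipeline a few lines up rewrites the coredns ConfigMap so the Corefile gains a hosts block mapping 192.168.39.1 to host.minikube.internal (with fallthrough) plus a log directive, which is what the "host record injected" message confirms. To inspect the result by hand (same binary and kubeconfig as in the log):

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml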
	I1209 22:49:50.246776   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.246808   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.246866   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.246889   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.247109   36778 main.go:141] libmachine: (ha-920193) DBG | Closing plugin on server side
	I1209 22:49:50.247126   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247142   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247149   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247150   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247161   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.247168   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.247161   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.247214   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.247452   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247465   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247474   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247491   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247524   36778 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 22:49:50.247539   36778 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 22:49:50.247452   36778 main.go:141] libmachine: (ha-920193) DBG | Closing plugin on server side
	I1209 22:49:50.247679   36778 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1209 22:49:50.247688   36778 round_trippers.go:469] Request Headers:
	I1209 22:49:50.247699   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:49:50.247705   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:49:50.258818   36778 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1209 22:49:50.259388   36778 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1209 22:49:50.259405   36778 round_trippers.go:469] Request Headers:
	I1209 22:49:50.259415   36778 round_trippers.go:473]     Content-Type: application/json
	I1209 22:49:50.259421   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:49:50.259427   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:49:50.263578   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:49:50.263947   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.263973   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.264222   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.264298   36778 main.go:141] libmachine: (ha-920193) DBG | Closing plugin on server side
	I1209 22:49:50.264309   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.266759   36778 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1209 22:49:50.268058   36778 addons.go:510] duration metric: took 920.142906ms for enable addons: enabled=[storage-provisioner default-storageclass]
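A quick spot-check of the two addons just enabled could look like this; the StorageClass name "standard" is grounded in the PUT request above, while the storage-provisioner pod name is an assumption based on the applied manifest:

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get storageclass standard
    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pod storage-provisioner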
	I1209 22:49:50.268097   36778 start.go:246] waiting for cluster config update ...
	I1209 22:49:50.268112   36778 start.go:255] writing updated cluster config ...
	I1209 22:49:50.269702   36778 out.go:201] 
	I1209 22:49:50.271046   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:49:50.271126   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:50.272711   36778 out.go:177] * Starting "ha-920193-m02" control-plane node in "ha-920193" cluster
	I1209 22:49:50.273838   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:49:50.273861   36778 cache.go:56] Caching tarball of preloaded images
	I1209 22:49:50.273946   36778 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:49:50.273960   36778 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:49:50.274036   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:50.274220   36778 start.go:360] acquireMachinesLock for ha-920193-m02: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:49:50.274272   36778 start.go:364] duration metric: took 30.506µs to acquireMachinesLock for "ha-920193-m02"
	I1209 22:49:50.274296   36778 start.go:93] Provisioning new machine with config: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:49:50.274418   36778 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1209 22:49:50.275986   36778 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 22:49:50.276071   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:50.276101   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:50.290689   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42005
	I1209 22:49:50.291090   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:50.291624   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:50.291657   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:50.291974   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:50.292165   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:49:50.292290   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:49:50.292460   36778 start.go:159] libmachine.API.Create for "ha-920193" (driver="kvm2")
	I1209 22:49:50.292488   36778 client.go:168] LocalClient.Create starting
	I1209 22:49:50.292523   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:49:50.292562   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:50.292580   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:50.292650   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:49:50.292677   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:50.292694   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:50.292719   36778 main.go:141] libmachine: Running pre-create checks...
	I1209 22:49:50.292730   36778 main.go:141] libmachine: (ha-920193-m02) Calling .PreCreateCheck
	I1209 22:49:50.292863   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetConfigRaw
	I1209 22:49:50.293207   36778 main.go:141] libmachine: Creating machine...
	I1209 22:49:50.293220   36778 main.go:141] libmachine: (ha-920193-m02) Calling .Create
	I1209 22:49:50.293319   36778 main.go:141] libmachine: (ha-920193-m02) Creating KVM machine...
	I1209 22:49:50.294569   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found existing default KVM network
	I1209 22:49:50.294708   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found existing private KVM network mk-ha-920193
	I1209 22:49:50.294863   36778 main.go:141] libmachine: (ha-920193-m02) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02 ...
	I1209 22:49:50.294888   36778 main.go:141] libmachine: (ha-920193-m02) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:49:50.294937   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.294840   37166 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:50.295026   36778 main.go:141] libmachine: (ha-920193-m02) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:49:50.540657   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.540505   37166 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa...
	I1209 22:49:50.636978   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.636881   37166 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/ha-920193-m02.rawdisk...
	I1209 22:49:50.637002   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Writing magic tar header
	I1209 22:49:50.637012   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Writing SSH key tar header
	I1209 22:49:50.637092   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.637012   37166 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02 ...
	I1209 22:49:50.637134   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02
	I1209 22:49:50.637167   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02 (perms=drwx------)
	I1209 22:49:50.637189   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:49:50.637210   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:49:50.637225   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:49:50.637240   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:49:50.637251   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:49:50.637263   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:49:50.637274   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:50.637286   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:49:50.637298   36778 main.go:141] libmachine: (ha-920193-m02) Creating domain...
	I1209 22:49:50.637312   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:49:50.637321   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:49:50.637330   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home
	I1209 22:49:50.637341   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Skipping /home - not owner
	I1209 22:49:50.638225   36778 main.go:141] libmachine: (ha-920193-m02) define libvirt domain using xml: 
	I1209 22:49:50.638247   36778 main.go:141] libmachine: (ha-920193-m02) <domain type='kvm'>
	I1209 22:49:50.638255   36778 main.go:141] libmachine: (ha-920193-m02)   <name>ha-920193-m02</name>
	I1209 22:49:50.638263   36778 main.go:141] libmachine: (ha-920193-m02)   <memory unit='MiB'>2200</memory>
	I1209 22:49:50.638271   36778 main.go:141] libmachine: (ha-920193-m02)   <vcpu>2</vcpu>
	I1209 22:49:50.638284   36778 main.go:141] libmachine: (ha-920193-m02)   <features>
	I1209 22:49:50.638291   36778 main.go:141] libmachine: (ha-920193-m02)     <acpi/>
	I1209 22:49:50.638306   36778 main.go:141] libmachine: (ha-920193-m02)     <apic/>
	I1209 22:49:50.638319   36778 main.go:141] libmachine: (ha-920193-m02)     <pae/>
	I1209 22:49:50.638328   36778 main.go:141] libmachine: (ha-920193-m02)     
	I1209 22:49:50.638333   36778 main.go:141] libmachine: (ha-920193-m02)   </features>
	I1209 22:49:50.638340   36778 main.go:141] libmachine: (ha-920193-m02)   <cpu mode='host-passthrough'>
	I1209 22:49:50.638346   36778 main.go:141] libmachine: (ha-920193-m02)   
	I1209 22:49:50.638356   36778 main.go:141] libmachine: (ha-920193-m02)   </cpu>
	I1209 22:49:50.638364   36778 main.go:141] libmachine: (ha-920193-m02)   <os>
	I1209 22:49:50.638380   36778 main.go:141] libmachine: (ha-920193-m02)     <type>hvm</type>
	I1209 22:49:50.638393   36778 main.go:141] libmachine: (ha-920193-m02)     <boot dev='cdrom'/>
	I1209 22:49:50.638403   36778 main.go:141] libmachine: (ha-920193-m02)     <boot dev='hd'/>
	I1209 22:49:50.638426   36778 main.go:141] libmachine: (ha-920193-m02)     <bootmenu enable='no'/>
	I1209 22:49:50.638448   36778 main.go:141] libmachine: (ha-920193-m02)   </os>
	I1209 22:49:50.638464   36778 main.go:141] libmachine: (ha-920193-m02)   <devices>
	I1209 22:49:50.638475   36778 main.go:141] libmachine: (ha-920193-m02)     <disk type='file' device='cdrom'>
	I1209 22:49:50.638507   36778 main.go:141] libmachine: (ha-920193-m02)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/boot2docker.iso'/>
	I1209 22:49:50.638533   36778 main.go:141] libmachine: (ha-920193-m02)       <target dev='hdc' bus='scsi'/>
	I1209 22:49:50.638547   36778 main.go:141] libmachine: (ha-920193-m02)       <readonly/>
	I1209 22:49:50.638559   36778 main.go:141] libmachine: (ha-920193-m02)     </disk>
	I1209 22:49:50.638570   36778 main.go:141] libmachine: (ha-920193-m02)     <disk type='file' device='disk'>
	I1209 22:49:50.638583   36778 main.go:141] libmachine: (ha-920193-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:49:50.638601   36778 main.go:141] libmachine: (ha-920193-m02)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/ha-920193-m02.rawdisk'/>
	I1209 22:49:50.638612   36778 main.go:141] libmachine: (ha-920193-m02)       <target dev='hda' bus='virtio'/>
	I1209 22:49:50.638623   36778 main.go:141] libmachine: (ha-920193-m02)     </disk>
	I1209 22:49:50.638632   36778 main.go:141] libmachine: (ha-920193-m02)     <interface type='network'>
	I1209 22:49:50.638641   36778 main.go:141] libmachine: (ha-920193-m02)       <source network='mk-ha-920193'/>
	I1209 22:49:50.638652   36778 main.go:141] libmachine: (ha-920193-m02)       <model type='virtio'/>
	I1209 22:49:50.638661   36778 main.go:141] libmachine: (ha-920193-m02)     </interface>
	I1209 22:49:50.638672   36778 main.go:141] libmachine: (ha-920193-m02)     <interface type='network'>
	I1209 22:49:50.638680   36778 main.go:141] libmachine: (ha-920193-m02)       <source network='default'/>
	I1209 22:49:50.638690   36778 main.go:141] libmachine: (ha-920193-m02)       <model type='virtio'/>
	I1209 22:49:50.638708   36778 main.go:141] libmachine: (ha-920193-m02)     </interface>
	I1209 22:49:50.638726   36778 main.go:141] libmachine: (ha-920193-m02)     <serial type='pty'>
	I1209 22:49:50.638741   36778 main.go:141] libmachine: (ha-920193-m02)       <target port='0'/>
	I1209 22:49:50.638748   36778 main.go:141] libmachine: (ha-920193-m02)     </serial>
	I1209 22:49:50.638756   36778 main.go:141] libmachine: (ha-920193-m02)     <console type='pty'>
	I1209 22:49:50.638764   36778 main.go:141] libmachine: (ha-920193-m02)       <target type='serial' port='0'/>
	I1209 22:49:50.638775   36778 main.go:141] libmachine: (ha-920193-m02)     </console>
	I1209 22:49:50.638784   36778 main.go:141] libmachine: (ha-920193-m02)     <rng model='virtio'>
	I1209 22:49:50.638793   36778 main.go:141] libmachine: (ha-920193-m02)       <backend model='random'>/dev/random</backend>
	I1209 22:49:50.638807   36778 main.go:141] libmachine: (ha-920193-m02)     </rng>
	I1209 22:49:50.638821   36778 main.go:141] libmachine: (ha-920193-m02)     
	I1209 22:49:50.638836   36778 main.go:141] libmachine: (ha-920193-m02)     
	I1209 22:49:50.638854   36778 main.go:141] libmachine: (ha-920193-m02)   </devices>
	I1209 22:49:50.638870   36778 main.go:141] libmachine: (ha-920193-m02) </domain>
	I1209 22:49:50.638881   36778 main.go:141] libmachine: (ha-920193-m02) 
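At this point libmachine has defined the ha-920193-m02 domain from the XML above; it is started a few lines further down. The definition can be inspected directly with virsh against the same qemu:///system URI from the profile config. The commands below are a debugging sketch, not something the test itself runs:

    virsh --connect qemu:///system dumpxml ha-920193-m02 | head -n 25
    virsh --connect qemu:///system domstate ha-920193-m02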
	I1209 22:49:50.645452   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:4e:0e:44 in network default
	I1209 22:49:50.646094   36778 main.go:141] libmachine: (ha-920193-m02) Ensuring networks are active...
	I1209 22:49:50.646118   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:50.646792   36778 main.go:141] libmachine: (ha-920193-m02) Ensuring network default is active
	I1209 22:49:50.647136   36778 main.go:141] libmachine: (ha-920193-m02) Ensuring network mk-ha-920193 is active
	I1209 22:49:50.647479   36778 main.go:141] libmachine: (ha-920193-m02) Getting domain xml...
	I1209 22:49:50.648166   36778 main.go:141] libmachine: (ha-920193-m02) Creating domain...
	I1209 22:49:51.846569   36778 main.go:141] libmachine: (ha-920193-m02) Waiting to get IP...
	I1209 22:49:51.847529   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:51.847984   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:51.848045   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:51.847987   37166 retry.go:31] will retry after 223.150886ms: waiting for machine to come up
	I1209 22:49:52.072674   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:52.073143   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:52.073214   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:52.073119   37166 retry.go:31] will retry after 342.157886ms: waiting for machine to come up
	I1209 22:49:52.416515   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:52.416911   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:52.416933   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:52.416873   37166 retry.go:31] will retry after 319.618715ms: waiting for machine to come up
	I1209 22:49:52.738511   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:52.739067   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:52.739096   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:52.739025   37166 retry.go:31] will retry after 426.813714ms: waiting for machine to come up
	I1209 22:49:53.167672   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:53.168111   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:53.168139   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:53.168063   37166 retry.go:31] will retry after 465.129361ms: waiting for machine to come up
	I1209 22:49:53.634495   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:53.635006   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:53.635033   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:53.634965   37166 retry.go:31] will retry after 688.228763ms: waiting for machine to come up
	I1209 22:49:54.324368   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:54.324751   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:54.324780   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:54.324706   37166 retry.go:31] will retry after 952.948713ms: waiting for machine to come up
	I1209 22:49:55.278732   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:55.279052   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:55.279084   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:55.279025   37166 retry.go:31] will retry after 1.032940312s: waiting for machine to come up
	I1209 22:49:56.313177   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:56.313589   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:56.313613   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:56.313562   37166 retry.go:31] will retry after 1.349167493s: waiting for machine to come up
	I1209 22:49:57.664618   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:57.665031   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:57.665060   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:57.664986   37166 retry.go:31] will retry after 1.512445541s: waiting for machine to come up
	I1209 22:49:59.179536   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:59.179914   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:59.179939   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:59.179864   37166 retry.go:31] will retry after 2.399970974s: waiting for machine to come up
	I1209 22:50:01.582227   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:01.582662   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:50:01.582690   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:50:01.582599   37166 retry.go:31] will retry after 2.728474301s: waiting for machine to come up
	I1209 22:50:04.312490   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:04.312880   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:50:04.312905   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:50:04.312847   37166 retry.go:31] will retry after 4.276505546s: waiting for machine to come up
	I1209 22:50:08.590485   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:08.590927   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:50:08.590949   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:50:08.590889   37166 retry.go:31] will retry after 4.29966265s: waiting for machine to come up
	I1209 22:50:12.892743   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.893228   36778 main.go:141] libmachine: (ha-920193-m02) Found IP for machine: 192.168.39.43
	I1209 22:50:12.893253   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has current primary IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
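The retry loop above ends once the guest obtains a DHCP lease on the private mk-ha-920193 network. The same lease information can be read manually from libvirt (again a debugging sketch, not part of the test flow):

    virsh --connect qemu:///system net-dhcp-leases mk-ha-920193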
	I1209 22:50:12.893261   36778 main.go:141] libmachine: (ha-920193-m02) Reserving static IP address...
	I1209 22:50:12.893598   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find host DHCP lease matching {name: "ha-920193-m02", mac: "52:54:00:e3:b9:61", ip: "192.168.39.43"} in network mk-ha-920193
	I1209 22:50:12.967208   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Getting to WaitForSSH function...
	I1209 22:50:12.967241   36778 main.go:141] libmachine: (ha-920193-m02) Reserved static IP address: 192.168.39.43
	I1209 22:50:12.967255   36778 main.go:141] libmachine: (ha-920193-m02) Waiting for SSH to be available...
	I1209 22:50:12.969615   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.969971   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:12.969998   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.970158   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Using SSH client type: external
	I1209 22:50:12.970180   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa (-rw-------)
	I1209 22:50:12.970211   36778 main.go:141] libmachine: (ha-920193-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:50:12.970224   36778 main.go:141] libmachine: (ha-920193-m02) DBG | About to run SSH command:
	I1209 22:50:12.970270   36778 main.go:141] libmachine: (ha-920193-m02) DBG | exit 0
	I1209 22:50:13.099696   36778 main.go:141] libmachine: (ha-920193-m02) DBG | SSH cmd err, output: <nil>: 
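The reachability probe above shells out to the system ssh client with the options logged a few lines earlier; an equivalent manual check, reusing the generated key from the log, looks like:

    ssh -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        docker@192.168.39.43 'exit 0' && echo "ssh reachable"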
	I1209 22:50:13.100005   36778 main.go:141] libmachine: (ha-920193-m02) KVM machine creation complete!
	I1209 22:50:13.100244   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetConfigRaw
	I1209 22:50:13.100810   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:13.100988   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:13.101128   36778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:50:13.101154   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetState
	I1209 22:50:13.102588   36778 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:50:13.102600   36778 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:50:13.102605   36778 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:50:13.102611   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.105041   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.105398   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.105421   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.105634   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.105791   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.105931   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.106034   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.106172   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.106381   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.106392   36778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:50:13.214686   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:50:13.214707   36778 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:50:13.214714   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.217518   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.217915   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.217939   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.218093   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.218249   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.218422   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.218594   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.218762   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.218925   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.218936   36778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:50:13.328344   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:50:13.328426   36778 main.go:141] libmachine: found compatible host: buildroot
	I1209 22:50:13.328436   36778 main.go:141] libmachine: Provisioning with buildroot...
	I1209 22:50:13.328445   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:50:13.328699   36778 buildroot.go:166] provisioning hostname "ha-920193-m02"
	I1209 22:50:13.328724   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:50:13.328916   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.331720   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.332124   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.332160   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.332317   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.332518   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.332696   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.332887   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.333073   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.333230   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.333241   36778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193-m02 && echo "ha-920193-m02" | sudo tee /etc/hostname
	I1209 22:50:13.453959   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193-m02
	
	I1209 22:50:13.453993   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.457007   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.457414   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.457445   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.457635   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.457816   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.457961   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.458096   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.458282   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.458465   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.458486   36778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:50:13.575704   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:50:13.575734   36778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:50:13.575756   36778 buildroot.go:174] setting up certificates
	I1209 22:50:13.575768   36778 provision.go:84] configureAuth start
	I1209 22:50:13.575777   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:50:13.576037   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:13.578662   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.579132   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.579159   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.579337   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.581290   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.581592   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.581613   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.581740   36778 provision.go:143] copyHostCerts
	I1209 22:50:13.581770   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:50:13.581820   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:50:13.581832   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:50:13.581924   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:50:13.582006   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:50:13.582026   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:50:13.582033   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:50:13.582058   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:50:13.582102   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:50:13.582122   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:50:13.582131   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:50:13.582166   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:50:13.582231   36778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193-m02 san=[127.0.0.1 192.168.39.43 ha-920193-m02 localhost minikube]
	I1209 22:50:13.756786   36778 provision.go:177] copyRemoteCerts
	I1209 22:50:13.756844   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:50:13.756875   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.759281   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.759620   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.759646   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.759868   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.760043   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.760166   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.760302   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:13.842746   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:50:13.842829   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:50:13.868488   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:50:13.868558   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 22:50:13.894237   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:50:13.894300   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 22:50:13.919207   36778 provision.go:87] duration metric: took 343.427038ms to configureAuth
	I1209 22:50:13.919237   36778 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:50:13.919436   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:50:13.919529   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.922321   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.922667   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.922689   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.922943   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.923101   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.923227   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.923381   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.923527   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.923766   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.923783   36778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:50:14.145275   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:50:14.145304   36778 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:50:14.145313   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetURL
	I1209 22:50:14.146583   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Using libvirt version 6000000
	I1209 22:50:14.148809   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.149140   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.149168   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.149302   36778 main.go:141] libmachine: Docker is up and running!
	I1209 22:50:14.149316   36778 main.go:141] libmachine: Reticulating splines...
	I1209 22:50:14.149322   36778 client.go:171] duration metric: took 23.856827848s to LocalClient.Create
	I1209 22:50:14.149351   36778 start.go:167] duration metric: took 23.856891761s to libmachine.API.Create "ha-920193"
	I1209 22:50:14.149370   36778 start.go:293] postStartSetup for "ha-920193-m02" (driver="kvm2")
	I1209 22:50:14.149387   36778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:50:14.149412   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.149683   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:50:14.149706   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:14.152301   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.152593   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.152623   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.152758   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.152950   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.153102   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.153238   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:14.237586   36778 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:50:14.241320   36778 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:50:14.241353   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:50:14.241430   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:50:14.241512   36778 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:50:14.241522   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:50:14.241599   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:50:14.250940   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:50:14.273559   36778 start.go:296] duration metric: took 124.171367ms for postStartSetup
	I1209 22:50:14.273622   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetConfigRaw
	I1209 22:50:14.274207   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:14.276819   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.277127   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.277156   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.277340   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:50:14.277540   36778 start.go:128] duration metric: took 24.003111268s to createHost
	I1209 22:50:14.277563   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:14.279937   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.280232   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.280257   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.280382   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.280557   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.280726   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.280910   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.281099   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:14.281291   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:14.281304   36778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:50:14.388152   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733784614.364424625
	
	I1209 22:50:14.388174   36778 fix.go:216] guest clock: 1733784614.364424625
	I1209 22:50:14.388181   36778 fix.go:229] Guest: 2024-12-09 22:50:14.364424625 +0000 UTC Remote: 2024-12-09 22:50:14.27755238 +0000 UTC m=+71.170238927 (delta=86.872245ms)
	I1209 22:50:14.388195   36778 fix.go:200] guest clock delta is within tolerance: 86.872245ms
	I1209 22:50:14.388200   36778 start.go:83] releasing machines lock for "ha-920193-m02", held for 24.113917393s
	I1209 22:50:14.388222   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.388471   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:14.391084   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.391432   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.391458   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.393935   36778 out.go:177] * Found network options:
	I1209 22:50:14.395356   36778 out.go:177]   - NO_PROXY=192.168.39.102
	W1209 22:50:14.396713   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:50:14.396769   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.397373   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.397558   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.397653   36778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:50:14.397697   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	W1209 22:50:14.397767   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:50:14.397855   36778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:50:14.397879   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:14.400330   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.400563   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.400725   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.400755   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.400909   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.400944   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.400970   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.401106   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.401188   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.401275   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.401373   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.401443   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:14.401504   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.401614   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:14.637188   36778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:50:14.643200   36778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:50:14.643281   36778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:50:14.659398   36778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 22:50:14.659426   36778 start.go:495] detecting cgroup driver to use...
	I1209 22:50:14.659491   36778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:50:14.676247   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:50:14.690114   36778 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:50:14.690183   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:50:14.704181   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:50:14.718407   36778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:50:14.836265   36778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:50:14.977440   36778 docker.go:233] disabling docker service ...
	I1209 22:50:14.977523   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:50:14.992218   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:50:15.006032   36778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:50:15.132938   36778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:50:15.246587   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:50:15.260594   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:50:15.278081   36778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:50:15.278144   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.288215   36778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:50:15.288291   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.298722   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.309333   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.319278   36778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:50:15.329514   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.339686   36778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.356544   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.367167   36778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:50:15.376313   36778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:50:15.376368   36778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:50:15.389607   36778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 22:50:15.399026   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:50:15.510388   36778 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 22:50:15.594142   36778 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:50:15.594209   36778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:50:15.598620   36778 start.go:563] Will wait 60s for crictl version
	I1209 22:50:15.598673   36778 ssh_runner.go:195] Run: which crictl
	I1209 22:50:15.602047   36778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:50:15.640250   36778 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:50:15.640331   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:50:15.667027   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:50:15.696782   36778 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:50:15.698100   36778 out.go:177]   - env NO_PROXY=192.168.39.102
	I1209 22:50:15.699295   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:15.701971   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:15.702367   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:15.702391   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:15.702593   36778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:50:15.706559   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:50:15.719413   36778 mustload.go:65] Loading cluster: ha-920193
	I1209 22:50:15.719679   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:50:15.720045   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:50:15.720080   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:50:15.735359   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I1209 22:50:15.735806   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:50:15.736258   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:50:15.736277   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:50:15.736597   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:50:15.736809   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:50:15.738383   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:50:15.738784   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:50:15.738819   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:50:15.754087   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I1209 22:50:15.754545   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:50:15.755016   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:50:15.755039   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:50:15.755363   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:50:15.755658   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:50:15.755811   36778 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.43
	I1209 22:50:15.755825   36778 certs.go:194] generating shared ca certs ...
	I1209 22:50:15.755842   36778 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:50:15.756003   36778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:50:15.756062   36778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:50:15.756077   36778 certs.go:256] generating profile certs ...
	I1209 22:50:15.756191   36778 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:50:15.756224   36778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a
	I1209 22:50:15.756244   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.43 192.168.39.254]
	I1209 22:50:15.922567   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a ...
	I1209 22:50:15.922607   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a: {Name:mkdd9b3ceabde3bba17fb86e02452182c7c5df88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:50:15.922833   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a ...
	I1209 22:50:15.922852   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a: {Name:mkf2dc6e973669b6272c7472a098255f36b1b21a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:50:15.922964   36778 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:50:15.923108   36778 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
	I1209 22:50:15.923250   36778 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:50:15.923268   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:50:15.923283   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:50:15.923300   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:50:15.923315   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:50:15.923331   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:50:15.923346   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:50:15.923361   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:50:15.923376   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:50:15.923447   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:50:15.923481   36778 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:50:15.923492   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:50:15.923526   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:50:15.923552   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:50:15.923617   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:50:15.923669   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:50:15.923701   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:50:15.923718   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:15.923736   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:50:15.923774   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:50:15.926684   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:15.927100   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:50:15.927132   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:15.927316   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:50:15.927520   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:50:15.927686   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:50:15.927817   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:50:15.995984   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 22:50:16.000689   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 22:50:16.010769   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 22:50:16.015461   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 22:50:16.025382   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 22:50:16.029170   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 22:50:16.038869   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 22:50:16.042928   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 22:50:16.052680   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 22:50:16.056624   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 22:50:16.067154   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 22:50:16.071136   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1209 22:50:16.081380   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:50:16.105907   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:50:16.130202   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:50:16.154712   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:50:16.178136   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 22:50:16.201144   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 22:50:16.223968   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:50:16.245967   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:50:16.268545   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:50:16.290945   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:50:16.313125   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:50:16.335026   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 22:50:16.350896   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 22:50:16.366797   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 22:50:16.382304   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 22:50:16.398151   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 22:50:16.413542   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1209 22:50:16.428943   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 22:50:16.443894   36778 ssh_runner.go:195] Run: openssl version
	I1209 22:50:16.449370   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:50:16.460122   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:50:16.464413   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:50:16.464474   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:50:16.470266   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 22:50:16.480854   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:50:16.491307   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:50:16.495420   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:50:16.495468   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:50:16.500658   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:50:16.511025   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:50:16.521204   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:16.525268   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:16.525347   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:16.530531   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 22:50:16.542187   36778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:50:16.546109   36778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:50:16.546164   36778 kubeadm.go:934] updating node {m02 192.168.39.43 8443 v1.31.2 crio true true} ...
	I1209 22:50:16.546250   36778 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 22:50:16.546279   36778 kube-vip.go:115] generating kube-vip config ...
	I1209 22:50:16.546321   36778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:50:16.565259   36778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:50:16.565317   36778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1209 22:50:16.565368   36778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:50:16.576227   36778 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 22:50:16.576286   36778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 22:50:16.587283   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 22:50:16.587313   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:50:16.587347   36778 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1209 22:50:16.587371   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:50:16.587429   36778 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1209 22:50:16.591406   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 22:50:16.591443   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 22:50:17.403840   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:50:17.403917   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:50:17.408515   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 22:50:17.408550   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 22:50:17.508668   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:50:17.539619   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:50:17.539709   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:50:17.547698   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 22:50:17.547746   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1209 22:50:17.976645   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 22:50:17.986050   36778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 22:50:18.001981   36778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:50:18.017737   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 22:50:18.034382   36778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:50:18.038243   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
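The /etc/hosts step above drops any stale control-plane.minikube.internal entry and appends the VIP 192.168.39.254, which is exactly what the grep -v / echo pipeline does in one shot. A rough Go equivalent, operating on a local file path you would substitute yourself (hostsPath, ip and name are placeholders):

// hosts_rewrite.go - sketch of the "strip old entry, append new one" rewrite.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing line that already maps this name.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHostsEntry("./hosts.test", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}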
	I1209 22:50:18.051238   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:50:18.168167   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:50:18.185010   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:50:18.185466   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:50:18.185511   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:50:18.200608   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36771
	I1209 22:50:18.201083   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:50:18.201577   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:50:18.201599   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:50:18.201983   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:50:18.202177   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:50:18.202335   36778 start.go:317] joinCluster: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:50:18.202454   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 22:50:18.202478   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:50:18.205838   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:18.206272   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:50:18.206305   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:18.206454   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:50:18.206651   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:50:18.206809   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:50:18.206953   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:50:18.346102   36778 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:50:18.346151   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0e0mum.qzhjvrjwvxlgpdn7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443"
	I1209 22:50:38.220755   36778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0e0mum.qzhjvrjwvxlgpdn7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443": (19.874577958s)
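The ~20 second gap between issuing and completing the command above is the kubeadm join itself: broadly, the joining node authenticates with the bootstrap token, pulls the cluster configuration, and brings up its own control-plane static pods as a second member behind the VIP. A sketch that only assembles the same argv from its parts (flag values copied from the log; the joinParams/joinArgs names are made up for illustration and nothing is executed):

// join_args.go - assembles and prints a kubeadm join command like the one logged above.
package main

import (
	"fmt"
	"strings"
)

type joinParams struct {
	Endpoint, Token, CAHash, CRISocket, NodeName, AdvertiseIP string
	BindPort                                                  int
}

func joinArgs(p joinParams) []string {
	return []string{
		"kubeadm", "join", p.Endpoint,
		"--token", p.Token,
		"--discovery-token-ca-cert-hash", p.CAHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", p.CRISocket,
		"--node-name=" + p.NodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + p.AdvertiseIP,
		fmt.Sprintf("--apiserver-bind-port=%d", p.BindPort),
	}
}

func main() {
	args := joinArgs(joinParams{
		Endpoint:    "control-plane.minikube.internal:8443",
		Token:       "0e0mum.qzhjvrjwvxlgpdn7",
		CAHash:      "sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b",
		CRISocket:   "unix:///var/run/crio/crio.sock",
		NodeName:    "ha-920193-m02",
		AdvertiseIP: "192.168.39.43",
		BindPort:    8443,
	})
	fmt.Println(strings.Join(args, " "))
}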
	I1209 22:50:38.220795   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 22:50:38.605694   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-920193-m02 minikube.k8s.io/updated_at=2024_12_09T22_50_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=ha-920193 minikube.k8s.io/primary=false
	I1209 22:50:38.732046   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-920193-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 22:50:38.853470   36778 start.go:319] duration metric: took 20.651129665s to joinCluster
	I1209 22:50:38.853557   36778 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:50:38.853987   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:50:38.855541   36778 out.go:177] * Verifying Kubernetes components...
	I1209 22:50:38.856758   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:50:39.134622   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:50:39.155772   36778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:50:39.156095   36778 kapi.go:59] client config for ha-920193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key", CAFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c9a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 22:50:39.156174   36778 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1209 22:50:39.156458   36778 node_ready.go:35] waiting up to 6m0s for node "ha-920193-m02" to be "Ready" ...
	I1209 22:50:39.156557   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:39.156569   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:39.156580   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:39.156589   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:39.166040   36778 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1209 22:50:39.656808   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:39.656835   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:39.656848   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:39.656853   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:39.660666   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:40.157282   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:40.157306   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:40.157314   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:40.157319   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:40.171594   36778 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1209 22:50:40.656953   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:40.656975   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:40.656984   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:40.656988   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:40.660321   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:41.157246   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:41.157267   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:41.157275   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:41.157278   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:41.160595   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:41.161242   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:41.657713   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:41.657743   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:41.657754   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:41.657760   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:41.661036   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:42.157055   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:42.157081   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:42.157092   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:42.157098   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:42.160406   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:42.657502   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:42.657525   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:42.657535   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:42.657543   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:42.660437   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:43.157580   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:43.157601   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:43.157610   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:43.157614   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:43.159874   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:43.657603   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:43.657624   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:43.657631   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:43.657635   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:43.661418   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:43.662212   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:44.157154   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:44.157180   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:44.157193   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:44.157199   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:44.160641   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:44.657594   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:44.657632   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:44.657639   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:44.657643   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:44.660444   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:45.156643   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:45.156665   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:45.156673   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:45.156678   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:45.159591   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:45.656824   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:45.656848   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:45.656860   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:45.656865   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:45.660567   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:46.157410   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:46.157431   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:46.157440   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:46.157444   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:46.164952   36778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1209 22:50:46.165425   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:46.656667   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:46.656688   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:46.656695   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:46.656701   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:46.660336   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:47.157296   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:47.157321   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:47.157329   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:47.157332   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:47.160332   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:47.657301   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:47.657323   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:47.657331   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:47.657336   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:47.660325   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:48.157563   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:48.157584   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:48.157594   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:48.157608   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:48.160951   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:48.657246   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:48.657273   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:48.657284   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:48.657292   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:48.660393   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:48.661028   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:49.157387   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:49.157407   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:49.157413   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:49.157418   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:49.160553   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:49.656857   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:49.656876   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:49.656884   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:49.656887   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:49.660150   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:50.157105   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:50.157127   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:50.157135   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:50.157138   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:50.160132   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:50.657157   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:50.657175   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:50.657183   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:50.657186   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:50.660060   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:51.156681   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:51.156703   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:51.156710   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:51.156715   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:51.160061   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:51.160485   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:51.656792   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:51.656814   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:51.656822   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:51.656828   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:51.660462   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:52.157422   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:52.157444   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:52.157452   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:52.157456   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:52.160620   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:52.657587   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:52.657612   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:52.657623   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:52.657635   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:52.661805   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:50:53.156794   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:53.156813   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:53.156820   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:53.156824   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:53.159611   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:53.657422   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:53.657443   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:53.657451   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:53.657456   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:53.660973   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:53.661490   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:54.156741   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:54.156775   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:54.156788   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:54.156793   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:54.159842   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:54.657520   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:54.657542   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:54.657551   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:54.657556   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:54.661360   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:55.157356   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:55.157381   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:55.157389   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:55.157398   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:55.160974   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:55.657357   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:55.657380   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:55.657386   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:55.657389   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:55.661109   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:55.661633   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:56.156805   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:56.156829   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:56.156842   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:56.156848   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:56.159652   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:56.657355   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:56.657382   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:56.657391   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:56.657396   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:56.660284   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.156798   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.156817   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.156825   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.156828   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.159439   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.160184   36778 node_ready.go:49] node "ha-920193-m02" has status "Ready":"True"
	I1209 22:50:57.160211   36778 node_ready.go:38] duration metric: took 18.003728094s for node "ha-920193-m02" to be "Ready" ...
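Between 22:50:39 and 22:50:57 the test simply re-GETs /api/v1/nodes/ha-920193-m02 every ~500ms until the node's Ready condition flips to True. With client-go the same wait can be written as a poll loop; a minimal sketch, assuming a kubeconfig at the default path and the standard client-go packages (this is not the test's own node_ready.go):

// wait_node_ready.go - sketch of polling a node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, mirroring the cadence in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-920193-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			return nodeReady(n), nil
		})
	fmt.Println("ready wait finished, err =", err)
}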
	I1209 22:50:57.160219   36778 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:50:57.160281   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:50:57.160291   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.160297   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.160301   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.163826   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.171109   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.171198   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9792g
	I1209 22:50:57.171207   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.171215   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.171218   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.175686   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:50:57.176418   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.176433   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.176440   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.176445   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.178918   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.179482   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.179502   36778 pod_ready.go:82] duration metric: took 8.366716ms for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.179511   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.179579   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pftgv
	I1209 22:50:57.179590   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.179601   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.179607   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.181884   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.182566   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.182584   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.182593   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.182603   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.184849   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.185336   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.185356   36778 pod_ready.go:82] duration metric: took 5.835616ms for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.185369   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.185431   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193
	I1209 22:50:57.185440   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.185446   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.185452   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.187419   36778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1209 22:50:57.188120   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.188138   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.188148   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.188155   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.190287   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.190719   36778 pod_ready.go:93] pod "etcd-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.190736   36778 pod_ready.go:82] duration metric: took 5.359942ms for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.190748   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.190809   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m02
	I1209 22:50:57.190819   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.190828   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.190835   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.192882   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.193624   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.193638   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.193645   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.193648   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.195725   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.196308   36778 pod_ready.go:93] pod "etcd-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.196330   36778 pod_ready.go:82] duration metric: took 5.570375ms for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.196346   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.357701   36778 request.go:632] Waited for 161.300261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:50:57.357803   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:50:57.357815   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.357826   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.357831   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.361143   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.557163   36778 request.go:632] Waited for 195.392304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.557255   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.557275   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.557286   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.557299   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.560687   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.561270   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.561292   36778 pod_ready.go:82] duration metric: took 364.939583ms for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
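The "Waited for ...ms due to client-side throttling" lines come from client-go's own rate limiter, not from the API server: the rest.Config dumped at 22:50:39 has QPS:0 and Burst:0, so the client falls back to its defaults (roughly 5 requests/s with a burst of 10) and delays these back-to-back GETs locally. If that latency mattered, the limits can be raised on the config before building the clientset; a sketch using the same kubeconfig-based setup as above:

// raise_qps.go - sketch of loosening client-go's client-side rate limiter.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10 when left at zero; bump them for chatty polling.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}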
	I1209 22:50:57.561303   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.757400   36778 request.go:632] Waited for 196.034135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:50:57.757501   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:50:57.757514   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.757525   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.757533   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.761021   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.957152   36778 request.go:632] Waited for 195.395123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.957252   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.957262   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.957269   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.957273   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.961000   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.961523   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.961541   36778 pod_ready.go:82] duration metric: took 400.228352ms for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.961551   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.157823   36778 request.go:632] Waited for 196.207607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:50:58.157936   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:50:58.157948   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.157956   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.157960   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.161121   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:58.357017   36778 request.go:632] Waited for 194.771557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:58.357073   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:58.357091   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.357099   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.357103   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.360088   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:58.360518   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:58.360541   36778 pod_ready.go:82] duration metric: took 398.983882ms for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.360551   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.557689   36778 request.go:632] Waited for 197.047701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:50:58.557763   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:50:58.557772   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.557779   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.557783   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.561314   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:58.757454   36778 request.go:632] Waited for 195.361025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:58.757514   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:58.757519   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.757531   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.757540   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.760353   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:58.760931   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:58.760952   36778 pod_ready.go:82] duration metric: took 400.394843ms for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.760961   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.956933   36778 request.go:632] Waited for 195.877051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:50:58.956989   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:50:58.956993   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.957001   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.957005   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.960313   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.157481   36778 request.go:632] Waited for 196.370711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:59.157545   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:59.157551   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.157558   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.157562   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.160790   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.161308   36778 pod_ready.go:93] pod "kube-proxy-lntbt" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:59.161325   36778 pod_ready.go:82] duration metric: took 400.358082ms for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.161334   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.357539   36778 request.go:632] Waited for 196.144123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:50:59.357600   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:50:59.357605   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.357614   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.357619   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.360709   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.557525   36778 request.go:632] Waited for 196.134266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.557582   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.557587   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.557594   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.557599   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.561037   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.561650   36778 pod_ready.go:93] pod "kube-proxy-r8nhm" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:59.561671   36778 pod_ready.go:82] duration metric: took 400.330133ms for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.561686   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.757716   36778 request.go:632] Waited for 195.957167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:50:59.757794   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:50:59.757799   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.757806   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.757810   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.760629   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:59.957516   36778 request.go:632] Waited for 196.356707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.957571   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.957576   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.957583   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.957589   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.960569   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:59.961033   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:59.961052   36778 pod_ready.go:82] duration metric: took 399.355328ms for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.961065   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:51:00.157215   36778 request.go:632] Waited for 196.068129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:51:00.157354   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:51:00.157371   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.157385   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.157393   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.160825   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:00.357607   36778 request.go:632] Waited for 196.256861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:51:00.357660   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:51:00.357665   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.357673   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.357676   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.360928   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:00.361370   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:51:00.361388   36778 pod_ready.go:82] duration metric: took 400.315143ms for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:51:00.361398   36778 pod_ready.go:39] duration metric: took 3.201168669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
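Each of the pod_ready waits above has the same shape as the node wait: GET the pod, GET the node it is scheduled on, and finish once the pod reports the PodReady condition as True. A condensed client-go sketch of that per-pod check (pod name copied from the log; the setup mirrors the earlier sketch, and in the real wait this runs inside a poll loop):

// pod_ready.go - sketch of checking a single pod's Ready condition with client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	p, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-ha-920193-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("PodReady =", podReady(p))
}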
	I1209 22:51:00.361416   36778 api_server.go:52] waiting for apiserver process to appear ...
	I1209 22:51:00.361461   36778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 22:51:00.375321   36778 api_server.go:72] duration metric: took 21.521720453s to wait for apiserver process to appear ...
	I1209 22:51:00.375346   36778 api_server.go:88] waiting for apiserver healthz status ...
	I1209 22:51:00.375364   36778 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1209 22:51:00.379577   36778 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1209 22:51:00.379640   36778 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1209 22:51:00.379648   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.379656   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.379662   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.380589   36778 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1209 22:51:00.380716   36778 api_server.go:141] control plane version: v1.31.2
	I1209 22:51:00.380756   36778 api_server.go:131] duration metric: took 5.402425ms to wait for apiserver health ...
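The health check above is an HTTPS GET against <apiserver>/healthz that expects the literal body "ok", followed by /version to record the control-plane version. A bare-bones probe in the same spirit; it skips certificate verification purely to keep the example short, whereas the test authenticates with the profile's client cert and CA:

// healthz_probe.go - sketch of the /healthz check (TLS verification skipped here only for brevity).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.102:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}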
	I1209 22:51:00.380766   36778 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 22:51:00.557205   36778 request.go:632] Waited for 176.35448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.557271   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.557277   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.557284   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.557289   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.561926   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:00.568583   36778 system_pods.go:59] 17 kube-system pods found
	I1209 22:51:00.568619   36778 system_pods.go:61] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:51:00.568631   36778 system_pods.go:61] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:51:00.568637   36778 system_pods.go:61] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:51:00.568643   36778 system_pods.go:61] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:51:00.568648   36778 system_pods.go:61] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:51:00.568652   36778 system_pods.go:61] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:51:00.568657   36778 system_pods.go:61] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:51:00.568662   36778 system_pods.go:61] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:51:00.568672   36778 system_pods.go:61] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:51:00.568677   36778 system_pods.go:61] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:51:00.568681   36778 system_pods.go:61] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:51:00.568687   36778 system_pods.go:61] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:51:00.568692   36778 system_pods.go:61] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:51:00.568699   36778 system_pods.go:61] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:51:00.568703   36778 system_pods.go:61] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:51:00.568709   36778 system_pods.go:61] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:51:00.568713   36778 system_pods.go:61] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:51:00.568720   36778 system_pods.go:74] duration metric: took 187.947853ms to wait for pod list to return data ...
	I1209 22:51:00.568736   36778 default_sa.go:34] waiting for default service account to be created ...
	I1209 22:51:00.757459   36778 request.go:632] Waited for 188.649373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:51:00.757529   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:51:00.757535   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.757542   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.757549   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.761133   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:00.761462   36778 default_sa.go:45] found service account: "default"
	I1209 22:51:00.761484   36778 default_sa.go:55] duration metric: took 192.741843ms for default service account to be created ...
	I1209 22:51:00.761493   36778 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 22:51:00.957815   36778 request.go:632] Waited for 196.251364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.957869   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.957874   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.957881   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.957886   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.962434   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:00.967784   36778 system_pods.go:86] 17 kube-system pods found
	I1209 22:51:00.967807   36778 system_pods.go:89] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:51:00.967813   36778 system_pods.go:89] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:51:00.967818   36778 system_pods.go:89] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:51:00.967822   36778 system_pods.go:89] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:51:00.967825   36778 system_pods.go:89] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:51:00.967829   36778 system_pods.go:89] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:51:00.967832   36778 system_pods.go:89] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:51:00.967836   36778 system_pods.go:89] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:51:00.967839   36778 system_pods.go:89] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:51:00.967843   36778 system_pods.go:89] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:51:00.967846   36778 system_pods.go:89] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:51:00.967849   36778 system_pods.go:89] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:51:00.967853   36778 system_pods.go:89] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:51:00.967856   36778 system_pods.go:89] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:51:00.967859   36778 system_pods.go:89] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:51:00.967862   36778 system_pods.go:89] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:51:00.967865   36778 system_pods.go:89] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:51:00.967872   36778 system_pods.go:126] duration metric: took 206.369849ms to wait for k8s-apps to be running ...
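The "k8s-apps to be running" wait amounts to listing the kube-system pods (17 found above) and checking that each reports phase Running. A sketch of that check under the same assumptions as the previous fragment; minikube's real check is more nuanced, so treat this as an approximation:

    package check

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // allSystemPodsRunning returns true once every pod in kube-system has
    // reached the Running phase.
    func allSystemPodsRunning(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil // e.g. still Pending or ContainerCreating
            }
        }
        return true, nil
    }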
	I1209 22:51:00.967881   36778 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 22:51:00.967920   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:51:00.982635   36778 system_svc.go:56] duration metric: took 14.746001ms WaitForService to wait for kubelet
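The kubelet wait just runs the systemctl command shown above inside the guest and treats exit status 0 as "active". A local sketch of the same idea (minikube runs it through its ssh_runner over SSH, which is omitted here):

    package check

    import "os/exec"

    // kubeletActive mirrors `sudo systemctl is-active --quiet service kubelet`:
    // the command exits 0 only when the unit is active.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }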
	I1209 22:51:00.982658   36778 kubeadm.go:582] duration metric: took 22.129061399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:51:00.982676   36778 node_conditions.go:102] verifying NodePressure condition ...
	I1209 22:51:01.157065   36778 request.go:632] Waited for 174.324712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1209 22:51:01.157132   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1209 22:51:01.157137   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:01.157146   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:01.157150   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:01.161631   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:01.162406   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:51:01.162427   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:51:01.162443   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:51:01.162449   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:51:01.162454   36778 node_conditions.go:105] duration metric: took 179.774021ms to run NodePressure ...
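The NodePressure step lists the nodes and reads the two capacity figures printed above (ephemeral storage and CPU), plus any pressure conditions. A hedged client-go sketch of that verification:

    package check

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity prints each node's ephemeral-storage and CPU capacity
    // and flags any non-Ready condition that is currently true (e.g. DiskPressure).
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
            for _, c := range n.Status.Conditions {
                if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  pressure condition: %s\n", c.Type)
                }
            }
        }
        return nil
    }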
	I1209 22:51:01.162470   36778 start.go:241] waiting for startup goroutines ...
	I1209 22:51:01.162500   36778 start.go:255] writing updated cluster config ...
	I1209 22:51:01.164529   36778 out.go:201] 
	I1209 22:51:01.165967   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:01.166048   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:51:01.167621   36778 out.go:177] * Starting "ha-920193-m03" control-plane node in "ha-920193" cluster
	I1209 22:51:01.168868   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:51:01.168885   36778 cache.go:56] Caching tarball of preloaded images
	I1209 22:51:01.168992   36778 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:51:01.169010   36778 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:51:01.169110   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:51:01.169269   36778 start.go:360] acquireMachinesLock for ha-920193-m03: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:51:01.169312   36778 start.go:364] duration metric: took 23.987µs to acquireMachinesLock for "ha-920193-m03"
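acquireMachinesLock serializes machine creation for the profile. minikube ships its own mutex implementation; purely to illustrate the idea, here is an exclusive advisory file lock with a retry loop and timeout (path, retry interval and timeout are made-up values):

    package lock

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    // acquire takes an exclusive, non-blocking flock on path and retries until
    // the timeout expires. The caller closes the returned file to release it.
    func acquire(path string, retry, timeout time.Duration) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return nil, err
        }
        deadline := time.Now().Add(timeout)
        for {
            if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
                return f, nil
            }
            if time.Now().After(deadline) {
                f.Close()
                return nil, fmt.Errorf("timed out acquiring lock on %s", path)
            }
            time.Sleep(retry)
        }
    }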
	I1209 22:51:01.169336   36778 start.go:93] Provisioning new machine with config: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:51:01.169433   36778 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1209 22:51:01.171416   36778 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 22:51:01.171522   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:01.171583   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:01.186366   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42027
	I1209 22:51:01.186874   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:01.187404   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:01.187428   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:01.187781   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:01.187979   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:01.188140   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:01.188306   36778 start.go:159] libmachine.API.Create for "ha-920193" (driver="kvm2")
	I1209 22:51:01.188339   36778 client.go:168] LocalClient.Create starting
	I1209 22:51:01.188376   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:51:01.188415   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:51:01.188430   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:51:01.188479   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:51:01.188497   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:51:01.188505   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:51:01.188519   36778 main.go:141] libmachine: Running pre-create checks...
	I1209 22:51:01.188524   36778 main.go:141] libmachine: (ha-920193-m03) Calling .PreCreateCheck
	I1209 22:51:01.188706   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetConfigRaw
	I1209 22:51:01.189120   36778 main.go:141] libmachine: Creating machine...
	I1209 22:51:01.189133   36778 main.go:141] libmachine: (ha-920193-m03) Calling .Create
	I1209 22:51:01.189263   36778 main.go:141] libmachine: (ha-920193-m03) Creating KVM machine...
	I1209 22:51:01.190619   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found existing default KVM network
	I1209 22:51:01.190780   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found existing private KVM network mk-ha-920193
	I1209 22:51:01.190893   36778 main.go:141] libmachine: (ha-920193-m03) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03 ...
	I1209 22:51:01.190907   36778 main.go:141] libmachine: (ha-920193-m03) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:51:01.191000   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.190898   37541 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:51:01.191087   36778 main.go:141] libmachine: (ha-920193-m03) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:51:01.428399   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.428270   37541 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa...
	I1209 22:51:01.739906   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.739799   37541 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/ha-920193-m03.rawdisk...
	I1209 22:51:01.739933   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Writing magic tar header
	I1209 22:51:01.739943   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Writing SSH key tar header
	I1209 22:51:01.739951   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.739915   37541 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03 ...
	I1209 22:51:01.740035   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03
	I1209 22:51:01.740064   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03 (perms=drwx------)
	I1209 22:51:01.740080   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:51:01.740097   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:51:01.740107   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:51:01.740114   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:51:01.740127   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:51:01.740140   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:51:01.740154   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:51:01.740167   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:51:01.740178   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:51:01.740189   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home
	I1209 22:51:01.740219   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:51:01.740244   36778 main.go:141] libmachine: (ha-920193-m03) Creating domain...
	I1209 22:51:01.740252   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Skipping /home - not owner
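A few lines up, the driver creates an SSH key pair (id_rsa) for the new machine before building its disk image. A rough sketch of generating an equivalent key pair in Go; the key size, file modes and paths are assumptions, not necessarily what the kvm2 driver uses:

    package keys

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // writeKeyPair writes a PEM-encoded RSA private key to privPath and the
    // matching public key in authorized_keys format to privPath+".pub".
    func writeKeyPair(privPath string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile(privPath, privPEM, 0o600); err != nil {
            return err
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return err
        }
        return os.WriteFile(privPath+".pub", ssh.MarshalAuthorizedKey(pub), 0o644)
    }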
	I1209 22:51:01.741067   36778 main.go:141] libmachine: (ha-920193-m03) define libvirt domain using xml: 
	I1209 22:51:01.741086   36778 main.go:141] libmachine: (ha-920193-m03) <domain type='kvm'>
	I1209 22:51:01.741093   36778 main.go:141] libmachine: (ha-920193-m03)   <name>ha-920193-m03</name>
	I1209 22:51:01.741098   36778 main.go:141] libmachine: (ha-920193-m03)   <memory unit='MiB'>2200</memory>
	I1209 22:51:01.741103   36778 main.go:141] libmachine: (ha-920193-m03)   <vcpu>2</vcpu>
	I1209 22:51:01.741110   36778 main.go:141] libmachine: (ha-920193-m03)   <features>
	I1209 22:51:01.741115   36778 main.go:141] libmachine: (ha-920193-m03)     <acpi/>
	I1209 22:51:01.741119   36778 main.go:141] libmachine: (ha-920193-m03)     <apic/>
	I1209 22:51:01.741124   36778 main.go:141] libmachine: (ha-920193-m03)     <pae/>
	I1209 22:51:01.741128   36778 main.go:141] libmachine: (ha-920193-m03)     
	I1209 22:51:01.741133   36778 main.go:141] libmachine: (ha-920193-m03)   </features>
	I1209 22:51:01.741147   36778 main.go:141] libmachine: (ha-920193-m03)   <cpu mode='host-passthrough'>
	I1209 22:51:01.741152   36778 main.go:141] libmachine: (ha-920193-m03)   
	I1209 22:51:01.741157   36778 main.go:141] libmachine: (ha-920193-m03)   </cpu>
	I1209 22:51:01.741162   36778 main.go:141] libmachine: (ha-920193-m03)   <os>
	I1209 22:51:01.741166   36778 main.go:141] libmachine: (ha-920193-m03)     <type>hvm</type>
	I1209 22:51:01.741171   36778 main.go:141] libmachine: (ha-920193-m03)     <boot dev='cdrom'/>
	I1209 22:51:01.741176   36778 main.go:141] libmachine: (ha-920193-m03)     <boot dev='hd'/>
	I1209 22:51:01.741184   36778 main.go:141] libmachine: (ha-920193-m03)     <bootmenu enable='no'/>
	I1209 22:51:01.741188   36778 main.go:141] libmachine: (ha-920193-m03)   </os>
	I1209 22:51:01.741225   36778 main.go:141] libmachine: (ha-920193-m03)   <devices>
	I1209 22:51:01.741245   36778 main.go:141] libmachine: (ha-920193-m03)     <disk type='file' device='cdrom'>
	I1209 22:51:01.741288   36778 main.go:141] libmachine: (ha-920193-m03)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/boot2docker.iso'/>
	I1209 22:51:01.741325   36778 main.go:141] libmachine: (ha-920193-m03)       <target dev='hdc' bus='scsi'/>
	I1209 22:51:01.741339   36778 main.go:141] libmachine: (ha-920193-m03)       <readonly/>
	I1209 22:51:01.741350   36778 main.go:141] libmachine: (ha-920193-m03)     </disk>
	I1209 22:51:01.741361   36778 main.go:141] libmachine: (ha-920193-m03)     <disk type='file' device='disk'>
	I1209 22:51:01.741373   36778 main.go:141] libmachine: (ha-920193-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:51:01.741386   36778 main.go:141] libmachine: (ha-920193-m03)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/ha-920193-m03.rawdisk'/>
	I1209 22:51:01.741397   36778 main.go:141] libmachine: (ha-920193-m03)       <target dev='hda' bus='virtio'/>
	I1209 22:51:01.741408   36778 main.go:141] libmachine: (ha-920193-m03)     </disk>
	I1209 22:51:01.741418   36778 main.go:141] libmachine: (ha-920193-m03)     <interface type='network'>
	I1209 22:51:01.741429   36778 main.go:141] libmachine: (ha-920193-m03)       <source network='mk-ha-920193'/>
	I1209 22:51:01.741437   36778 main.go:141] libmachine: (ha-920193-m03)       <model type='virtio'/>
	I1209 22:51:01.741447   36778 main.go:141] libmachine: (ha-920193-m03)     </interface>
	I1209 22:51:01.741456   36778 main.go:141] libmachine: (ha-920193-m03)     <interface type='network'>
	I1209 22:51:01.741472   36778 main.go:141] libmachine: (ha-920193-m03)       <source network='default'/>
	I1209 22:51:01.741483   36778 main.go:141] libmachine: (ha-920193-m03)       <model type='virtio'/>
	I1209 22:51:01.741496   36778 main.go:141] libmachine: (ha-920193-m03)     </interface>
	I1209 22:51:01.741507   36778 main.go:141] libmachine: (ha-920193-m03)     <serial type='pty'>
	I1209 22:51:01.741516   36778 main.go:141] libmachine: (ha-920193-m03)       <target port='0'/>
	I1209 22:51:01.741525   36778 main.go:141] libmachine: (ha-920193-m03)     </serial>
	I1209 22:51:01.741534   36778 main.go:141] libmachine: (ha-920193-m03)     <console type='pty'>
	I1209 22:51:01.741544   36778 main.go:141] libmachine: (ha-920193-m03)       <target type='serial' port='0'/>
	I1209 22:51:01.741552   36778 main.go:141] libmachine: (ha-920193-m03)     </console>
	I1209 22:51:01.741566   36778 main.go:141] libmachine: (ha-920193-m03)     <rng model='virtio'>
	I1209 22:51:01.741580   36778 main.go:141] libmachine: (ha-920193-m03)       <backend model='random'>/dev/random</backend>
	I1209 22:51:01.741590   36778 main.go:141] libmachine: (ha-920193-m03)     </rng>
	I1209 22:51:01.741597   36778 main.go:141] libmachine: (ha-920193-m03)     
	I1209 22:51:01.741606   36778 main.go:141] libmachine: (ha-920193-m03)     
	I1209 22:51:01.741616   36778 main.go:141] libmachine: (ha-920193-m03)   </devices>
	I1209 22:51:01.741623   36778 main.go:141] libmachine: (ha-920193-m03) </domain>
	I1209 22:51:01.741635   36778 main.go:141] libmachine: (ha-920193-m03) 
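The XML above is handed to libvirt to define and boot the ha-920193-m03 domain. A minimal sketch with the libvirt Go bindings; the import path is the current upstream module and the error handling is simplified, so this is not the kvm2 driver's exact code path:

    package vm

    import (
        libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart defines a domain from an XML document like the one logged
    // above and starts it, i.e. the "Creating domain..." step.
    func defineAndStart(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()

        return dom.Create()
    }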
	I1209 22:51:01.749628   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:ca:84:fc in network default
	I1209 22:51:01.750354   36778 main.go:141] libmachine: (ha-920193-m03) Ensuring networks are active...
	I1209 22:51:01.750395   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:01.751100   36778 main.go:141] libmachine: (ha-920193-m03) Ensuring network default is active
	I1209 22:51:01.751465   36778 main.go:141] libmachine: (ha-920193-m03) Ensuring network mk-ha-920193 is active
	I1209 22:51:01.751930   36778 main.go:141] libmachine: (ha-920193-m03) Getting domain xml...
	I1209 22:51:01.752802   36778 main.go:141] libmachine: (ha-920193-m03) Creating domain...
	I1209 22:51:03.003454   36778 main.go:141] libmachine: (ha-920193-m03) Waiting to get IP...
	I1209 22:51:03.004238   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.004647   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.004670   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.004626   37541 retry.go:31] will retry after 297.46379ms: waiting for machine to come up
	I1209 22:51:03.304151   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.304627   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.304651   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.304586   37541 retry.go:31] will retry after 341.743592ms: waiting for machine to come up
	I1209 22:51:03.648185   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.648648   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.648681   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.648610   37541 retry.go:31] will retry after 348.703343ms: waiting for machine to come up
	I1209 22:51:03.999220   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.999761   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.999783   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.999722   37541 retry.go:31] will retry after 471.208269ms: waiting for machine to come up
	I1209 22:51:04.473118   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:04.473644   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:04.473698   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:04.473622   37541 retry.go:31] will retry after 567.031016ms: waiting for machine to come up
	I1209 22:51:05.042388   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:05.042845   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:05.042890   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:05.042828   37541 retry.go:31] will retry after 635.422002ms: waiting for machine to come up
	I1209 22:51:05.679729   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:05.680179   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:05.680197   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:05.680151   37541 retry.go:31] will retry after 1.009913666s: waiting for machine to come up
	I1209 22:51:06.691434   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:06.692093   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:06.692115   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:06.692049   37541 retry.go:31] will retry after 1.22911274s: waiting for machine to come up
	I1209 22:51:07.923301   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:07.923871   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:07.923895   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:07.923821   37541 retry.go:31] will retry after 1.262587003s: waiting for machine to come up
	I1209 22:51:09.187598   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:09.188051   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:09.188081   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:09.188005   37541 retry.go:31] will retry after 2.033467764s: waiting for machine to come up
	I1209 22:51:11.223284   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:11.223845   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:11.223872   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:11.223795   37541 retry.go:31] will retry after 2.889234368s: waiting for machine to come up
	I1209 22:51:14.116824   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:14.117240   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:14.117262   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:14.117201   37541 retry.go:31] will retry after 2.84022101s: waiting for machine to come up
	I1209 22:51:16.958771   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:16.959194   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:16.959219   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:16.959151   37541 retry.go:31] will retry after 3.882632517s: waiting for machine to come up
	I1209 22:51:20.846163   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:20.846626   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:20.846647   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:20.846582   37541 retry.go:31] will retry after 4.879681656s: waiting for machine to come up
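The "will retry after ..." lines are a polling loop with a growing, jittered delay while the new VM waits for a DHCP lease. A generic sketch of that pattern; the initial delay, growth factor and jitter are illustrative, not minikube's exact values:

    package retry

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor polls check until it reports success or maxWait elapses, sleeping
    // a progressively longer, jittered interval between attempts.
    func waitFor(check func() (bool, error), maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            ok, err := check()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
            delay = delay * 3 / 2
        }
        return fmt.Errorf("timed out after %s", maxWait)
    }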
	I1209 22:51:25.727341   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.727988   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has current primary IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.728010   36778 main.go:141] libmachine: (ha-920193-m03) Found IP for machine: 192.168.39.45
	I1209 22:51:25.728024   36778 main.go:141] libmachine: (ha-920193-m03) Reserving static IP address...
	I1209 22:51:25.728426   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find host DHCP lease matching {name: "ha-920193-m03", mac: "52:54:00:50:0a:7f", ip: "192.168.39.45"} in network mk-ha-920193
	I1209 22:51:25.801758   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Getting to WaitForSSH function...
	I1209 22:51:25.801788   36778 main.go:141] libmachine: (ha-920193-m03) Reserved static IP address: 192.168.39.45
	I1209 22:51:25.801801   36778 main.go:141] libmachine: (ha-920193-m03) Waiting for SSH to be available...
	I1209 22:51:25.804862   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.805259   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:minikube Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:25.805292   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.805437   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Using SSH client type: external
	I1209 22:51:25.805466   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa (-rw-------)
	I1209 22:51:25.805497   36778 main.go:141] libmachine: (ha-920193-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:51:25.805521   36778 main.go:141] libmachine: (ha-920193-m03) DBG | About to run SSH command:
	I1209 22:51:25.805536   36778 main.go:141] libmachine: (ha-920193-m03) DBG | exit 0
	I1209 22:51:25.927825   36778 main.go:141] libmachine: (ha-920193-m03) DBG | SSH cmd err, output: <nil>: 
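WaitForSSH shells out to the system ssh client with the options listed above and simply runs `exit 0` until the command succeeds. A sketch of that probe; the user, host and key path are placeholders:

    package sshprobe

    import "os/exec"

    // reachable runs `exit 0` on the guest through the system ssh client with
    // host-key checking disabled and key-only authentication, returning true
    // once the login works.
    func reachable(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@"+ip,
            "exit 0",
        )
        return cmd.Run() == nil
    }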
	I1209 22:51:25.928111   36778 main.go:141] libmachine: (ha-920193-m03) KVM machine creation complete!
	I1209 22:51:25.928439   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetConfigRaw
	I1209 22:51:25.928948   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:25.929144   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:25.929273   36778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:51:25.929318   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetState
	I1209 22:51:25.930677   36778 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:51:25.930689   36778 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:51:25.930694   36778 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:51:25.930702   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:25.933545   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.933940   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:25.933962   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.934133   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:25.934287   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:25.934450   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:25.934592   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:25.934747   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:25.934964   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:25.934979   36778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:51:26.038809   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:51:26.038831   36778 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:51:26.038839   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.041686   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.041976   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.042008   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.042164   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.042336   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.042474   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.042609   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.042802   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.042955   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.042966   36778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:51:26.148122   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:51:26.148211   36778 main.go:141] libmachine: found compatible host: buildroot
	I1209 22:51:26.148225   36778 main.go:141] libmachine: Provisioning with buildroot...
	I1209 22:51:26.148236   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:26.148529   36778 buildroot.go:166] provisioning hostname "ha-920193-m03"
	I1209 22:51:26.148558   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:26.148758   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.151543   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.151998   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.152027   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.152153   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.152318   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.152485   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.152628   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.152792   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.152967   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.152984   36778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193-m03 && echo "ha-920193-m03" | sudo tee /etc/hostname
	I1209 22:51:26.273873   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193-m03
	
	I1209 22:51:26.273909   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.276949   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.277338   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.277363   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.277530   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.277710   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.277857   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.278009   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.278182   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.278378   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.278395   36778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:51:26.396863   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:51:26.396892   36778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:51:26.396911   36778 buildroot.go:174] setting up certificates
	I1209 22:51:26.396941   36778 provision.go:84] configureAuth start
	I1209 22:51:26.396969   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:26.397249   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:26.400060   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.400552   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.400587   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.400787   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.403205   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.403621   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.403649   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.403809   36778 provision.go:143] copyHostCerts
	I1209 22:51:26.403843   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:51:26.403883   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:51:26.403895   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:51:26.403963   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:51:26.404040   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:51:26.404057   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:51:26.404065   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:51:26.404088   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:51:26.404134   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:51:26.404151   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:51:26.404158   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:51:26.404179   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:51:26.404226   36778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193-m03 san=[127.0.0.1 192.168.39.45 ha-920193-m03 localhost minikube]
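configureAuth signs a server certificate against the local CA with the SAN list shown above (127.0.0.1, the node IP, the hostname, localhost, minikube). A compact crypto/x509 sketch of issuing such a certificate; serial handling, validity period and key usages are simplified assumptions, and the real provisioner differs:

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert returns a DER-encoded server certificate for the given key,
    // signed by caCert/caKey, with the SANs from the log above.
    func newServerCert(caCert *x509.Certificate, caKey, key *rsa.PrivateKey) ([]byte, error) {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-920193-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-920193-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.45")},
        }
        // PEM-encode the result before writing it out as server.pem.
        return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    }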
	I1209 22:51:26.742826   36778 provision.go:177] copyRemoteCerts
	I1209 22:51:26.742899   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:51:26.742929   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.745666   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.745993   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.746025   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.746168   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.746370   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.746525   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.746673   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:26.830893   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:51:26.830957   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:51:26.856889   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:51:26.856964   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 22:51:26.883482   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:51:26.883555   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 22:51:26.908478   36778 provision.go:87] duration metric: took 511.5225ms to configureAuth
	I1209 22:51:26.908504   36778 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:51:26.908720   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:26.908806   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.911525   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.911882   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.911910   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.912106   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.912305   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.912470   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.912617   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.912830   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.913029   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.913046   36778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:51:27.123000   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:51:27.123030   36778 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:51:27.123040   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetURL
	I1209 22:51:27.124367   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Using libvirt version 6000000
	I1209 22:51:27.126749   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.127125   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.127158   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.127291   36778 main.go:141] libmachine: Docker is up and running!
	I1209 22:51:27.127312   36778 main.go:141] libmachine: Reticulating splines...
	I1209 22:51:27.127327   36778 client.go:171] duration metric: took 25.938971166s to LocalClient.Create
	I1209 22:51:27.127361   36778 start.go:167] duration metric: took 25.939054874s to libmachine.API.Create "ha-920193"
	I1209 22:51:27.127375   36778 start.go:293] postStartSetup for "ha-920193-m03" (driver="kvm2")
	I1209 22:51:27.127391   36778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:51:27.127417   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.127659   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:51:27.127685   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:27.130451   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.130869   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.130897   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.131187   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.131380   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.131593   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.131737   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:27.214943   36778 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:51:27.219203   36778 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:51:27.219230   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:51:27.219297   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:51:27.219368   36778 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:51:27.219377   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:51:27.219454   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:51:27.229647   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:51:27.256219   36778 start.go:296] duration metric: took 128.828108ms for postStartSetup
	I1209 22:51:27.256272   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetConfigRaw
	I1209 22:51:27.256939   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:27.259520   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.259847   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.259871   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.260187   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:51:27.260393   36778 start.go:128] duration metric: took 26.090950019s to createHost
	I1209 22:51:27.260418   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:27.262865   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.263234   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.263258   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.263424   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.263637   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.263812   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.263948   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.264111   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:27.264266   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:27.264276   36778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:51:27.367958   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733784687.346724594
	
	I1209 22:51:27.367980   36778 fix.go:216] guest clock: 1733784687.346724594
	I1209 22:51:27.367990   36778 fix.go:229] Guest: 2024-12-09 22:51:27.346724594 +0000 UTC Remote: 2024-12-09 22:51:27.260405928 +0000 UTC m=+144.153092475 (delta=86.318666ms)
	I1209 22:51:27.368010   36778 fix.go:200] guest clock delta is within tolerance: 86.318666ms
	I1209 22:51:27.368017   36778 start.go:83] releasing machines lock for "ha-920193-m03", held for 26.19869273s
	I1209 22:51:27.368043   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.368295   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:27.370584   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.370886   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.370925   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.372694   36778 out.go:177] * Found network options:
	I1209 22:51:27.373916   36778 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.43
	W1209 22:51:27.375001   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 22:51:27.375023   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:51:27.375036   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.375488   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.375695   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.375813   36778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:51:27.375854   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	W1209 22:51:27.375861   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 22:51:27.375898   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:51:27.375979   36778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:51:27.376001   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:27.378647   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.378715   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.378991   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.379016   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.379059   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.379077   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.379200   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.379345   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.379350   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.379608   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.379611   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.379810   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.379814   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:27.379979   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:27.613722   36778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:51:27.619553   36778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:51:27.619634   36778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:51:27.635746   36778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 22:51:27.635772   36778 start.go:495] detecting cgroup driver to use...
	I1209 22:51:27.635826   36778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:51:27.653845   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:51:27.668792   36778 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:51:27.668852   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:51:27.683547   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:51:27.698233   36778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:51:27.824917   36778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:51:27.972308   36778 docker.go:233] disabling docker service ...
	I1209 22:51:27.972387   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:51:27.987195   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:51:28.000581   36778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:51:28.137925   36778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:51:28.271243   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:51:28.285221   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:51:28.303416   36778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:51:28.303486   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.314415   36778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:51:28.314487   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.324832   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.336511   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.346899   36778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:51:28.358193   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.368602   36778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.386409   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.397070   36778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:51:28.406418   36778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:51:28.406478   36778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:51:28.419010   36778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 22:51:28.428601   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:51:28.547013   36778 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 22:51:28.639590   36778 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:51:28.639672   36778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:51:28.644400   36778 start.go:563] Will wait 60s for crictl version
	I1209 22:51:28.644447   36778 ssh_runner.go:195] Run: which crictl
	I1209 22:51:28.648450   36778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:51:28.685819   36778 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:51:28.685915   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:51:28.713055   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:51:28.743093   36778 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:51:28.744486   36778 out.go:177]   - env NO_PROXY=192.168.39.102
	I1209 22:51:28.745701   36778 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.43
	I1209 22:51:28.746682   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:28.749397   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:28.749762   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:28.749786   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:28.749968   36778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:51:28.754027   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:51:28.765381   36778 mustload.go:65] Loading cluster: ha-920193
	I1209 22:51:28.765606   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:28.765871   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:28.765916   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:28.781482   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I1209 22:51:28.781893   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:28.782266   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:28.782287   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:28.782526   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:28.782726   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:51:28.784149   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:51:28.784420   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:28.784463   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:28.799758   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1209 22:51:28.800232   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:28.800726   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:28.800752   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:28.801514   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:28.801709   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:51:28.801891   36778 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.45
	I1209 22:51:28.801903   36778 certs.go:194] generating shared ca certs ...
	I1209 22:51:28.801923   36778 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:51:28.802065   36778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:51:28.802119   36778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:51:28.802134   36778 certs.go:256] generating profile certs ...
	I1209 22:51:28.802225   36778 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:51:28.802259   36778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a
	I1209 22:51:28.802283   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.43 192.168.39.45 192.168.39.254]
	I1209 22:51:28.918029   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a ...
	I1209 22:51:28.918070   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a: {Name:mkb9baad787ad98ea3bbef921d1279904d63e258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:51:28.918300   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a ...
	I1209 22:51:28.918321   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a: {Name:mk6d0bc06f9a231b982576741314205a71ae81f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:51:28.918454   36778 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:51:28.918653   36778 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
	I1209 22:51:28.918832   36778 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:51:28.918852   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:51:28.918869   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:51:28.918882   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:51:28.918897   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:51:28.918909   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:51:28.918920   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:51:28.918930   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:51:28.918940   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:51:28.918992   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:51:28.919020   36778 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:51:28.919030   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:51:28.919050   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:51:28.919071   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:51:28.919092   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:51:28.919165   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:51:28.919200   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:51:28.919214   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:51:28.919226   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:28.919256   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:51:28.922496   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:28.922907   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:51:28.922924   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:28.923121   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:51:28.923334   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:51:28.923493   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:51:28.923637   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:51:28.995976   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 22:51:29.001595   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 22:51:29.014651   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 22:51:29.018976   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 22:51:29.031698   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 22:51:29.035774   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 22:51:29.047740   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 22:51:29.055239   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 22:51:29.068897   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 22:51:29.073278   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 22:51:29.083471   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 22:51:29.087771   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1209 22:51:29.099200   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:51:29.124484   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:51:29.146898   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:51:29.170925   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:51:29.194172   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1209 22:51:29.216851   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 22:51:29.238922   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:51:29.261472   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:51:29.285294   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:51:29.308795   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:51:29.332153   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:51:29.356878   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 22:51:29.373363   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 22:51:29.389889   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 22:51:29.406229   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 22:51:29.422321   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 22:51:29.439481   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1209 22:51:29.457534   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 22:51:29.474790   36778 ssh_runner.go:195] Run: openssl version
	I1209 22:51:29.480386   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:51:29.491491   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:51:29.496002   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:51:29.496065   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:51:29.501912   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:51:29.512683   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:51:29.523589   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:29.527903   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:29.527953   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:29.533408   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 22:51:29.544241   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:51:29.554741   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:51:29.559538   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:51:29.559622   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:51:29.565390   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 22:51:29.576363   36778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:51:29.580324   36778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:51:29.580397   36778 kubeadm.go:934] updating node {m03 192.168.39.45 8443 v1.31.2 crio true true} ...
	I1209 22:51:29.580506   36778 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 22:51:29.580552   36778 kube-vip.go:115] generating kube-vip config ...
	I1209 22:51:29.580597   36778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:51:29.601123   36778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:51:29.601198   36778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1209 22:51:29.601245   36778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:51:29.616816   36778 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 22:51:29.616873   36778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 22:51:29.626547   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 22:51:29.626551   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1209 22:51:29.626581   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:51:29.626608   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:51:29.626662   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:51:29.626551   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1209 22:51:29.626680   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:51:29.626713   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:51:29.630710   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 22:51:29.630743   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 22:51:29.661909   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:51:29.661957   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 22:51:29.661993   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 22:51:29.662034   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:51:29.693387   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 22:51:29.693423   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1209 22:51:30.497307   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 22:51:30.507919   36778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 22:51:30.525676   36778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:51:30.544107   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 22:51:30.560963   36778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:51:30.564949   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:51:30.577803   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:51:30.711834   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:51:30.729249   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:51:30.729790   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:30.729852   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:30.745894   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I1209 22:51:30.746400   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:30.746903   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:30.746923   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:30.747244   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:30.747474   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:51:30.747637   36778 start.go:317] joinCluster: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:51:30.747751   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 22:51:30.747772   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:51:30.750739   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:30.751188   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:51:30.751212   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:30.751382   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:51:30.751610   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:51:30.751784   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:51:30.751955   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:51:30.921112   36778 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:51:30.921184   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uin3tt.yfutths3ueks8fx7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m03 --control-plane --apiserver-advertise-address=192.168.39.45 --apiserver-bind-port=8443"
	I1209 22:51:51.979391   36778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uin3tt.yfutths3ueks8fx7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m03 --control-plane --apiserver-advertise-address=192.168.39.45 --apiserver-bind-port=8443": (21.05816353s)
	I1209 22:51:51.979426   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 22:51:52.687851   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-920193-m03 minikube.k8s.io/updated_at=2024_12_09T22_51_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=ha-920193 minikube.k8s.io/primary=false
	I1209 22:51:52.803074   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-920193-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 22:51:52.923717   36778 start.go:319] duration metric: took 22.176073752s to joinCluster
	I1209 22:51:52.923810   36778 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:51:52.924248   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:52.925117   36778 out.go:177] * Verifying Kubernetes components...
	I1209 22:51:52.927170   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:51:53.166362   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:51:53.186053   36778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:51:53.186348   36778 kapi.go:59] client config for ha-920193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key", CAFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c9a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 22:51:53.186424   36778 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1209 22:51:53.186669   36778 node_ready.go:35] waiting up to 6m0s for node "ha-920193-m03" to be "Ready" ...
	I1209 22:51:53.186744   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:53.186755   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:53.186774   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:53.186786   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:53.191049   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:53.686961   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:53.686986   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:53.686997   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:53.687007   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:53.691244   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:54.186985   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:54.187011   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:54.187024   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:54.187030   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:54.265267   36778 round_trippers.go:574] Response Status: 200 OK in 78 milliseconds
	I1209 22:51:54.687008   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:54.687031   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:54.687042   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:54.687050   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:54.690480   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:55.187500   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:55.187525   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:55.187535   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:55.187540   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:55.191178   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:55.191830   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:51:55.687762   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:55.687790   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:55.687802   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:55.687832   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:55.691762   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:56.187494   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:56.187516   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:56.187534   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:56.187543   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:56.191706   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:56.687665   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:56.687691   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:56.687700   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:56.687705   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:56.690707   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:51:57.187710   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:57.187731   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:57.187739   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:57.187743   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:57.191208   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:57.192244   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:51:57.687242   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:57.687266   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:57.687277   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:57.687284   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:57.692231   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:58.187334   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:58.187369   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:58.187404   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:58.187410   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:58.190420   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:51:58.687040   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:58.687060   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:58.687087   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:58.687092   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:58.690458   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:59.187542   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:59.187579   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:59.187590   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:59.187598   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:59.191084   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:59.687057   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:59.687079   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:59.687087   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:59.687090   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:59.762365   36778 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I1209 22:51:59.763672   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:00.187782   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:00.187809   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:00.187824   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:00.187830   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:00.190992   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:00.687396   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:00.687424   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:00.687436   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:00.687443   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:00.690509   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:01.187706   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:01.187726   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:01.187735   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:01.187738   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:01.191284   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:01.687807   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:01.687830   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:01.687838   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:01.687841   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:01.692246   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:02.187139   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:02.187164   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:02.187172   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:02.187176   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:02.191262   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:02.191900   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:02.687239   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:02.687260   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:02.687268   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:02.687272   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:02.690588   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:03.186879   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:03.186901   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:03.186909   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:03.186913   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:03.190077   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:03.686945   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:03.686970   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:03.686976   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:03.686980   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:03.690246   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:04.187422   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:04.187453   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:04.187461   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:04.187475   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:04.190833   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:04.686862   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:04.686888   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:04.686895   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:04.686899   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:04.690474   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:04.691179   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:05.187647   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:05.187672   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:05.187680   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:05.187686   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:05.191042   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:05.687592   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:05.687619   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:05.687631   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:05.687638   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:05.695966   36778 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1209 22:52:06.187585   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:06.187617   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:06.187624   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:06.187627   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:06.190871   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:06.687343   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:06.687365   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:06.687372   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:06.687376   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:06.691065   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:06.691740   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:07.186885   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:07.186908   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:07.186916   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:07.186920   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:07.190452   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:07.687481   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:07.687506   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:07.687517   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:07.687522   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:07.690781   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:08.187842   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:08.187865   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:08.187873   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:08.187877   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:08.190745   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:52:08.687010   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:08.687039   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:08.687047   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:08.687050   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:08.690129   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:09.187057   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:09.187082   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:09.187100   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:09.187105   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:09.190445   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:09.191229   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:09.687849   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:09.687877   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:09.687887   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:09.687896   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:09.691161   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:10.187009   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:10.187030   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:10.187038   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:10.187041   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:10.190809   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:10.687323   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:10.687345   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:10.687353   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:10.687356   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:10.690476   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.187726   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:11.187753   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.187765   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.187771   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.190528   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:52:11.191296   36778 node_ready.go:49] node "ha-920193-m03" has status "Ready":"True"
	I1209 22:52:11.191322   36778 node_ready.go:38] duration metric: took 18.004635224s for node "ha-920193-m03" to be "Ready" ...
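The loop above is the node-readiness poll: the client re-fetches the node object roughly every 500ms until its Ready condition reports "True". Below is a minimal client-go sketch of the same pattern, not minikube's actual implementation; the kubeconfig path is a placeholder, and the node name is taken from the log.

    // Illustrative only: poll a node until its Ready condition is True,
    // mirroring the GET loop in the log above. Placeholder kubeconfig path.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeIsReady(node *corev1.Node) bool {
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()

    	const nodeName = "ha-920193-m03"
    	for {
    		node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
    		if err == nil && nodeIsReady(node) {
    			fmt.Printf("node %q is Ready\n", nodeName)
    			return
    		}
    		// Retry on transient errors or a not-yet-Ready status.
    		select {
    		case <-ctx.Done():
    			panic(fmt.Sprintf("timed out waiting for node %q", nodeName))
    		case <-time.After(500 * time.Millisecond): // the log above polls at roughly this interval
    		}
    	}
    }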
	I1209 22:52:11.191347   36778 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:52:11.191433   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:11.191446   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.191457   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.191463   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.197370   36778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 22:52:11.208757   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.208877   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9792g
	I1209 22:52:11.208889   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.208900   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.208908   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.213394   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.214171   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.214187   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.214197   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.214204   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.217611   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.218273   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.218301   36778 pod_ready.go:82] duration metric: took 9.507458ms for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.218314   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.218394   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pftgv
	I1209 22:52:11.218405   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.218415   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.218420   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.221934   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.223013   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.223030   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.223037   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.223041   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.226045   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:52:11.226613   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.226633   36778 pod_ready.go:82] duration metric: took 8.310101ms for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.226645   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.226713   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193
	I1209 22:52:11.226722   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.226729   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.226736   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.232210   36778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 22:52:11.233134   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.233148   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.233156   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.233159   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.236922   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.237775   36778 pod_ready.go:93] pod "etcd-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.237796   36778 pod_ready.go:82] duration metric: took 11.143234ms for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.237806   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.237867   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m02
	I1209 22:52:11.237875   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.237882   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.237887   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.242036   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.242839   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:11.242858   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.242869   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.242877   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.246444   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.247204   36778 pod_ready.go:93] pod "etcd-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.247221   36778 pod_ready.go:82] duration metric: took 9.409944ms for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.247231   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.388592   36778 request.go:632] Waited for 141.281694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m03
	I1209 22:52:11.388678   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m03
	I1209 22:52:11.388690   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.388704   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.388713   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.392012   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.587869   36778 request.go:632] Waited for 195.273739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:11.587951   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:11.587957   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.587964   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.587968   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.591423   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.592154   36778 pod_ready.go:93] pod "etcd-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.592174   36778 pod_ready.go:82] duration metric: took 344.933564ms for pod "etcd-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
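The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's client-side rate limiter (the QPS/Burst settings on rest.Config), not from server-side API Priority and Fairness. A minimal sketch of where those knobs live; the QPS/Burst values and kubeconfig path are arbitrary examples, not minikube's settings.

    // Illustrative only: raising the client-side rate limit that produces the
    // "Waited for ... due to client-side throttling" messages above.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/flowcontrol"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
    	if err != nil {
    		panic(err)
    	}

    	// Option 1: set QPS/Burst directly (example values).
    	config.QPS = 50
    	config.Burst = 100
    	// Option 2: install an explicit token-bucket limiter; when set, it
    	// takes precedence over QPS/Burst.
    	config.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)

    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods\n", len(pods.Items))
    }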
	I1209 22:52:11.592194   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.788563   36778 request.go:632] Waited for 196.298723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:52:11.788656   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:52:11.788669   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.788679   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.788687   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.792940   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.988037   36778 request.go:632] Waited for 194.354692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.988107   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.988113   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.988121   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.988125   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.992370   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.992995   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.993012   36778 pod_ready.go:82] duration metric: took 400.807496ms for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.993021   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.188095   36778 request.go:632] Waited for 195.006713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:52:12.188167   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:52:12.188172   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.188180   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.188185   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.191780   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.388747   36778 request.go:632] Waited for 196.170639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:12.388823   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:12.388829   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.388856   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.388869   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.392301   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.392894   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:12.392921   36778 pod_ready.go:82] duration metric: took 399.892746ms for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.392938   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.587836   36778 request.go:632] Waited for 194.810311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m03
	I1209 22:52:12.587925   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m03
	I1209 22:52:12.587934   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.587948   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.587958   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.591021   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.787947   36778 request.go:632] Waited for 196.297135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:12.788010   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:12.788016   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.788024   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.788032   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.791450   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.792173   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:12.792194   36778 pod_ready.go:82] duration metric: took 399.248841ms for pod "kube-apiserver-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.792210   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.988330   36778 request.go:632] Waited for 196.053217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:52:12.988409   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:52:12.988415   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.988423   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.988428   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.992155   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.188272   36778 request.go:632] Waited for 195.156662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:13.188340   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:13.188346   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.188354   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.188362   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.192008   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.192630   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:13.192650   36778 pod_ready.go:82] duration metric: took 400.432601ms for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.192661   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.388559   36778 request.go:632] Waited for 195.821537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:52:13.388616   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:52:13.388621   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.388629   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.388634   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.391883   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.587935   36778 request.go:632] Waited for 195.28191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:13.587989   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:13.587994   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.588007   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.588010   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.591630   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.592151   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:13.592169   36778 pod_ready.go:82] duration metric: took 399.499137ms for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.592180   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.788332   36778 request.go:632] Waited for 196.084844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m03
	I1209 22:52:13.788412   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m03
	I1209 22:52:13.788419   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.788429   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.788435   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.792121   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.988484   36778 request.go:632] Waited for 195.461528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:13.988555   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:13.988567   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.988579   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.988589   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.992243   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.992809   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:13.992827   36778 pod_ready.go:82] duration metric: took 400.64066ms for pod "kube-controller-manager-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.992842   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.187961   36778 request.go:632] Waited for 195.049639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:52:14.188050   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:52:14.188058   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.188071   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.188080   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.191692   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.388730   36778 request.go:632] Waited for 196.239352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:14.388788   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:14.388802   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.388813   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.388817   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.392311   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.392971   36778 pod_ready.go:93] pod "kube-proxy-lntbt" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:14.392992   36778 pod_ready.go:82] duration metric: took 400.138793ms for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.393007   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pr7zk" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.588013   36778 request.go:632] Waited for 194.93384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pr7zk
	I1209 22:52:14.588077   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pr7zk
	I1209 22:52:14.588082   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.588095   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.588102   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.591447   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.788698   36778 request.go:632] Waited for 196.390033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:14.788766   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:14.788775   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.788787   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.788800   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.792338   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.793156   36778 pod_ready.go:93] pod "kube-proxy-pr7zk" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:14.793181   36778 pod_ready.go:82] duration metric: took 400.165156ms for pod "kube-proxy-pr7zk" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.793195   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.988348   36778 request.go:632] Waited for 195.014123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:52:14.988427   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:52:14.988434   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.988444   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.988457   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.993239   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:15.188292   36778 request.go:632] Waited for 194.264701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.188390   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.188403   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.188418   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.188429   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.192041   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.192565   36778 pod_ready.go:93] pod "kube-proxy-r8nhm" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:15.192584   36778 pod_ready.go:82] duration metric: took 399.381952ms for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.192595   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.388147   36778 request.go:632] Waited for 195.488765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:52:15.388224   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:52:15.388233   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.388240   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.388248   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.391603   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.588758   36778 request.go:632] Waited for 196.3144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.588837   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.588843   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.588850   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.588860   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.592681   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.593301   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:15.593327   36778 pod_ready.go:82] duration metric: took 400.724982ms for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.593343   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.788627   36778 request.go:632] Waited for 195.204455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:52:15.788686   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:52:15.788691   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.788699   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.788704   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.792349   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.988329   36778 request.go:632] Waited for 195.36216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:15.988396   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:15.988402   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.988408   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.988412   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.991578   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.992400   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:15.992418   36778 pod_ready.go:82] duration metric: took 399.067203ms for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.992428   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:16.188427   36778 request.go:632] Waited for 195.939633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m03
	I1209 22:52:16.188480   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m03
	I1209 22:52:16.188489   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.188496   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.188501   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.192012   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:16.388006   36778 request.go:632] Waited for 195.368293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:16.388057   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:16.388062   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.388069   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.388073   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.392950   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:16.393391   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:16.393409   36778 pod_ready.go:82] duration metric: took 400.975145ms for pod "kube-scheduler-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:16.393420   36778 pod_ready.go:39] duration metric: took 5.202056835s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:52:16.393435   36778 api_server.go:52] waiting for apiserver process to appear ...
	I1209 22:52:16.393482   36778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 22:52:16.409725   36778 api_server.go:72] duration metric: took 23.485873684s to wait for apiserver process to appear ...
	I1209 22:52:16.409759   36778 api_server.go:88] waiting for apiserver healthz status ...
	I1209 22:52:16.409786   36778 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1209 22:52:16.414224   36778 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1209 22:52:16.414307   36778 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1209 22:52:16.414316   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.414324   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.414330   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.415229   36778 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1209 22:52:16.415280   36778 api_server.go:141] control plane version: v1.31.2
	I1209 22:52:16.415291   36778 api_server.go:131] duration metric: took 5.527187ms to wait for apiserver health ...
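The two probes above, GET /healthz (which returns the literal body "ok" when the apiserver is healthy) and GET /version, can be reproduced through the discovery client's REST interface. A minimal sketch, assuming a placeholder kubeconfig path:

    // Illustrative only: the same /healthz and /version checks seen above.
    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	// GET /healthz: a healthy apiserver answers 200 with the body "ok".
    	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("healthz: %s\n", body)

    	// GET /version: reports the control-plane version (v1.31.2 in this run).
    	v, err := client.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("control plane version: %s\n", v.GitVersion)
    }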
	I1209 22:52:16.415298   36778 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 22:52:16.588740   36778 request.go:632] Waited for 173.378808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.588806   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.588811   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.588818   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.588822   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.595459   36778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 22:52:16.602952   36778 system_pods.go:59] 24 kube-system pods found
	I1209 22:52:16.602979   36778 system_pods.go:61] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:52:16.602985   36778 system_pods.go:61] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:52:16.602989   36778 system_pods.go:61] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:52:16.602993   36778 system_pods.go:61] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:52:16.602996   36778 system_pods.go:61] "etcd-ha-920193-m03" [f9f5fa3d-3d06-47a8-b3f2-ed544b1cfb74] Running
	I1209 22:52:16.603001   36778 system_pods.go:61] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:52:16.603004   36778 system_pods.go:61] "kindnet-drj9q" [662d116b-e7ec-437c-9eeb-207cee5beecd] Running
	I1209 22:52:16.603007   36778 system_pods.go:61] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:52:16.603010   36778 system_pods.go:61] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:52:16.603015   36778 system_pods.go:61] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:52:16.603018   36778 system_pods.go:61] "kube-apiserver-ha-920193-m03" [1a55c551-c7bf-4b9f-ac49-e750cec2a92f] Running
	I1209 22:52:16.603022   36778 system_pods.go:61] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:52:16.603026   36778 system_pods.go:61] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:52:16.603031   36778 system_pods.go:61] "kube-controller-manager-ha-920193-m03" [d5278ea9-8878-4c18-bb6a-abb82a5bde12] Running
	I1209 22:52:16.603035   36778 system_pods.go:61] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:52:16.603038   36778 system_pods.go:61] "kube-proxy-pr7zk" [8236ddcf-f836-449c-8401-f799dd1a94f8] Running
	I1209 22:52:16.603041   36778 system_pods.go:61] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:52:16.603044   36778 system_pods.go:61] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:52:16.603047   36778 system_pods.go:61] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:52:16.603050   36778 system_pods.go:61] "kube-scheduler-ha-920193-m03" [27607c05-10b3-4d82-ba5e-2f3e7a44d2ad] Running
	I1209 22:52:16.603054   36778 system_pods.go:61] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:52:16.603057   36778 system_pods.go:61] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:52:16.603060   36778 system_pods.go:61] "kube-vip-ha-920193-m03" [7a747c56-20d6-41bb-b7c1-1db054ee699c] Running
	I1209 22:52:16.603062   36778 system_pods.go:61] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:52:16.603068   36778 system_pods.go:74] duration metric: took 187.765008ms to wait for pod list to return data ...
	I1209 22:52:16.603077   36778 default_sa.go:34] waiting for default service account to be created ...
	I1209 22:52:16.788510   36778 request.go:632] Waited for 185.359314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:52:16.788566   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:52:16.788571   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.788579   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.788586   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.791991   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:16.792139   36778 default_sa.go:45] found service account: "default"
	I1209 22:52:16.792154   36778 default_sa.go:55] duration metric: took 189.072143ms for default service account to be created ...
	I1209 22:52:16.792164   36778 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 22:52:16.988637   36778 request.go:632] Waited for 196.396881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.988723   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.988732   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.988740   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.988743   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.995659   36778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 22:52:17.002627   36778 system_pods.go:86] 24 kube-system pods found
	I1209 22:52:17.002660   36778 system_pods.go:89] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:52:17.002667   36778 system_pods.go:89] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:52:17.002672   36778 system_pods.go:89] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:52:17.002676   36778 system_pods.go:89] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:52:17.002679   36778 system_pods.go:89] "etcd-ha-920193-m03" [f9f5fa3d-3d06-47a8-b3f2-ed544b1cfb74] Running
	I1209 22:52:17.002683   36778 system_pods.go:89] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:52:17.002686   36778 system_pods.go:89] "kindnet-drj9q" [662d116b-e7ec-437c-9eeb-207cee5beecd] Running
	I1209 22:52:17.002690   36778 system_pods.go:89] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:52:17.002693   36778 system_pods.go:89] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:52:17.002697   36778 system_pods.go:89] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:52:17.002700   36778 system_pods.go:89] "kube-apiserver-ha-920193-m03" [1a55c551-c7bf-4b9f-ac49-e750cec2a92f] Running
	I1209 22:52:17.002703   36778 system_pods.go:89] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:52:17.002707   36778 system_pods.go:89] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:52:17.002710   36778 system_pods.go:89] "kube-controller-manager-ha-920193-m03" [d5278ea9-8878-4c18-bb6a-abb82a5bde12] Running
	I1209 22:52:17.002717   36778 system_pods.go:89] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:52:17.002720   36778 system_pods.go:89] "kube-proxy-pr7zk" [8236ddcf-f836-449c-8401-f799dd1a94f8] Running
	I1209 22:52:17.002723   36778 system_pods.go:89] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:52:17.002726   36778 system_pods.go:89] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:52:17.002730   36778 system_pods.go:89] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:52:17.002734   36778 system_pods.go:89] "kube-scheduler-ha-920193-m03" [27607c05-10b3-4d82-ba5e-2f3e7a44d2ad] Running
	I1209 22:52:17.002738   36778 system_pods.go:89] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:52:17.002740   36778 system_pods.go:89] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:52:17.002744   36778 system_pods.go:89] "kube-vip-ha-920193-m03" [7a747c56-20d6-41bb-b7c1-1db054ee699c] Running
	I1209 22:52:17.002747   36778 system_pods.go:89] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:52:17.002753   36778 system_pods.go:126] duration metric: took 210.583954ms to wait for k8s-apps to be running ...
	I1209 22:52:17.002760   36778 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 22:52:17.002802   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:52:17.018265   36778 system_svc.go:56] duration metric: took 15.492212ms WaitForService to wait for kubelet
	I1209 22:52:17.018301   36778 kubeadm.go:582] duration metric: took 24.09445385s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:52:17.018323   36778 node_conditions.go:102] verifying NodePressure condition ...
	I1209 22:52:17.188743   36778 request.go:632] Waited for 170.323133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1209 22:52:17.188800   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1209 22:52:17.188807   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:17.188816   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:17.188823   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:17.193008   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:17.194620   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:52:17.194642   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:52:17.194653   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:52:17.194657   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:52:17.194661   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:52:17.194664   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:52:17.194668   36778 node_conditions.go:105] duration metric: took 176.339707ms to run NodePressure ...
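The NodePressure lines above report each node's CPU and ephemeral-storage figures (2 CPUs and 17734596Ki per node in this run). A minimal sketch that lists nodes and prints the corresponding quantities; it reads Status.Capacity, which is an assumption since the log does not show whether Capacity or Allocatable is used, and the kubeconfig path is a placeholder.

    // Illustrative only: print each node's CPU and ephemeral-storage figures,
    // matching the NodePressure output above.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Copy the quantities to local variables so the pointer-receiver
    		// String() method can be called on addressable values.
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    	}
    }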
	I1209 22:52:17.194678   36778 start.go:241] waiting for startup goroutines ...
	I1209 22:52:17.194700   36778 start.go:255] writing updated cluster config ...
	I1209 22:52:17.194994   36778 ssh_runner.go:195] Run: rm -f paused
	I1209 22:52:17.247192   36778 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 22:52:17.250117   36778 out.go:177] * Done! kubectl is now configured to use "ha-920193" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.750571215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784968750544580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71c68bf9-a894-48ba-a16a-a87694a098e5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.751085862Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9656711-bfae-4cb7-b04a-1960eb0798ac name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.751142672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9656711-bfae-4cb7-b04a-1960eb0798ac name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.751387559Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9656711-bfae-4cb7-b04a-1960eb0798ac name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.787990896Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29d7fde3-5efd-45ea-bbe5-c9abe36551be name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.788081017Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29d7fde3-5efd-45ea-bbe5-c9abe36551be name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.789329278Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5dcdd4f9-262f-43b0-a2ac-b29945db1061 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.790127776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784968790072683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5dcdd4f9-262f-43b0-a2ac-b29945db1061 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.790654632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c26799b1-e730-4a56-af12-19e41d1cf218 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.790752369Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c26799b1-e730-4a56-af12-19e41d1cf218 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.790982992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c26799b1-e730-4a56-af12-19e41d1cf218 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.829113670Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a268829-cf0c-4399-81b0-4ce72b27d81f name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.829207656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a268829-cf0c-4399-81b0-4ce72b27d81f name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.830357705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6fd30589-df79-4589-9ed8-c18143092f7a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.830964994Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784968830936912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fd30589-df79-4589-9ed8-c18143092f7a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.831470701Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=972cde7a-1cac-486d-bae4-2a7b9d8b0365 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.831552838Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=972cde7a-1cac-486d-bae4-2a7b9d8b0365 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.831832976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=972cde7a-1cac-486d-bae4-2a7b9d8b0365 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.868199420Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d6fc319-634d-4493-b948-c81e1758f21e name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.868278567Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d6fc319-634d-4493-b948-c81e1758f21e name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.869185746Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8fbf862-32fa-423c-a019-c0aa3f24deb1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.869629330Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784968869606709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8fbf862-32fa-423c-a019-c0aa3f24deb1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.873078849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b170cb1a-b9a1-41b1-b983-aa5a3f1bed3a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.873223656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b170cb1a-b9a1-41b1-b983-aa5a3f1bed3a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:08 ha-920193 crio[663]: time="2024-12-09 22:56:08.874530189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b170cb1a-b9a1-41b1-b983-aa5a3f1bed3a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b2098445c3438       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   32c399f593c29       busybox-7dff88458-4dbs2
	14b80feac0f9a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   28a5e497d421c       coredns-7c65d6cfc9-9792g
	6bdcee2ff30bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   8986bab4f9538       coredns-7c65d6cfc9-pftgv
	a6a62ed3f6ca8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   24f95152f1094       storage-provisioner
	d26f562ad5527       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   91e324c9c3171       kindnet-rcctv
	233aa49869db4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   7d30b07a36a6c       kube-proxy-r8nhm
	b845a7a938050       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   dcec6011252c4       kube-vip-ha-920193
	2c5a043b38715       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   a053c05339f97       kube-apiserver-ha-920193
	f0a29f1dc44e4       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   7dd45ba230f90       kube-controller-manager-ha-920193
	b8197a166eeaf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   5b9cd68863c14       etcd-ha-920193
	6ee0fecee78f0       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   ba6c2156966ab       kube-scheduler-ha-920193
	
	
	==> coredns [14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c] <==
	[INFO] 10.244.2.2:60285 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00013048s
	[INFO] 10.244.0.4:42105 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201273s
	[INFO] 10.244.0.4:33722 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003973627s
	[INFO] 10.244.0.4:50780 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003385872s
	[INFO] 10.244.0.4:46762 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000330906s
	[INFO] 10.244.0.4:41821 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099413s
	[INFO] 10.244.1.2:38814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240081s
	[INFO] 10.244.1.2:51472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001124121s
	[INFO] 10.244.1.2:49496 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094508s
	[INFO] 10.244.2.2:44597 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168981s
	[INFO] 10.244.2.2:56334 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001450617s
	[INFO] 10.244.2.2:52317 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077228s
	[INFO] 10.244.0.4:57299 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133066s
	[INFO] 10.244.0.4:56277 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119106s
	[INFO] 10.244.0.4:45466 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040838s
	[INFO] 10.244.1.2:44460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200839s
	[INFO] 10.244.2.2:38498 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135133s
	[INFO] 10.244.2.2:50433 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00021653s
	[INFO] 10.244.2.2:49338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098224s
	[INFO] 10.244.0.4:33757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178322s
	[INFO] 10.244.0.4:48357 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000197259s
	[INFO] 10.244.0.4:36014 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000126459s
	[INFO] 10.244.1.2:50940 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000306385s
	[INFO] 10.244.2.2:39693 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191708s
	[INFO] 10.244.2.2:43130 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156713s
	
	
	==> coredns [6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a] <==
	[INFO] 10.244.2.2:53803 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001802154s
	[INFO] 10.244.0.4:53804 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136883s
	[INFO] 10.244.0.4:33536 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133128s
	[INFO] 10.244.0.4:40697 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109987s
	[INFO] 10.244.1.2:60686 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001746087s
	[INFO] 10.244.1.2:57981 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000176425s
	[INFO] 10.244.1.2:42922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001279s
	[INFO] 10.244.1.2:49248 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000199359s
	[INFO] 10.244.1.2:56349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176613s
	[INFO] 10.244.2.2:37288 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194316s
	[INFO] 10.244.2.2:36807 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001853178s
	[INFO] 10.244.2.2:47892 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097133s
	[INFO] 10.244.2.2:50492 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000249713s
	[INFO] 10.244.2.2:42642 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102673s
	[INFO] 10.244.0.4:45744 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170409s
	[INFO] 10.244.1.2:36488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227015s
	[INFO] 10.244.1.2:37416 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118932s
	[INFO] 10.244.1.2:48536 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176061s
	[INFO] 10.244.2.2:47072 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110597s
	[INFO] 10.244.0.4:58052 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000268133s
	[INFO] 10.244.1.2:38540 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000277422s
	[INFO] 10.244.1.2:55804 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000232786s
	[INFO] 10.244.1.2:35281 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000214405s
	[INFO] 10.244.2.2:37415 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174588s
	[INFO] 10.244.2.2:32790 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097554s
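
The query log above (NOERROR/NXDOMAIN answers for cluster.local and host.minikube.internal lookups) can be pulled per CoreDNS pod with kubectl. A minimal sketch, assuming the kubeconfig context is named after the profile (ha-920193) and CoreDNS carries the usual k8s-app=kube-dns label:

    # List the CoreDNS pods in kube-system.
    kubectl --context ha-920193 -n kube-system get pods -l k8s-app=kube-dns
    # Dump the query log of one of them (pod name taken from the node description below).
    kubectl --context ha-920193 -n kube-system logs coredns-7c65d6cfc9-9792g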
	
	
	==> describe nodes <==
	Name:               ha-920193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T22_49_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:49:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:56:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:49:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:49:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:49:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:50:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-920193
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9825096d628741caa811f99c10cc6460
	  System UUID:                9825096d-6287-41ca-a811-f99c10cc6460
	  Boot ID:                    7af2b544-54c4-4e33-8dc8-e2313bb29389
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4dbs2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 coredns-7c65d6cfc9-9792g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m20s
	  kube-system                 coredns-7c65d6cfc9-pftgv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m20s
	  kube-system                 etcd-ha-920193                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m25s
	  kube-system                 kindnet-rcctv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-920193             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-ha-920193    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-proxy-r8nhm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-920193             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-vip-ha-920193                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m18s  kube-proxy       
	  Normal  Starting                 6m25s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m25s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m25s  kubelet          Node ha-920193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s  kubelet          Node ha-920193 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s  kubelet          Node ha-920193 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m21s  node-controller  Node ha-920193 event: Registered Node ha-920193 in Controller
	  Normal  NodeReady                6m5s   kubelet          Node ha-920193 status is now: NodeReady
	  Normal  RegisteredNode           5m26s  node-controller  Node ha-920193 event: Registered Node ha-920193 in Controller
	  Normal  RegisteredNode           4m12s  node-controller  Node ha-920193 event: Registered Node ha-920193 in Controller
	
	
	Name:               ha-920193-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T22_50_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:50:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:53:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-920193-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 418684ffa8244b8180cf28f3a347b4c2
	  System UUID:                418684ff-a824-4b81-80cf-28f3a347b4c2
	  Boot ID:                    15131626-aa5d-4727-aedd-7039ff10fa6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rkqdv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-920193-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m31s
	  kube-system                 kindnet-7bbbc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m33s
	  kube-system                 kube-apiserver-ha-920193-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-controller-manager-ha-920193-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-proxy-lntbt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-scheduler-ha-920193-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-vip-ha-920193-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m30s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m33s (x8 over 5m34s)  kubelet          Node ha-920193-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s (x8 over 5m34s)  kubelet          Node ha-920193-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s (x7 over 5m34s)  kubelet          Node ha-920193-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-920193-m02 event: Registered Node ha-920193-m02 in Controller
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-920193-m02 event: Registered Node ha-920193-m02 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-920193-m02 event: Registered Node ha-920193-m02 in Controller
	  Normal  NodeNotReady             117s                   node-controller  Node ha-920193-m02 status is now: NodeNotReady
	
	
	Name:               ha-920193-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T22_51_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:51:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:56:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:51:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:51:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:51:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:52:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    ha-920193-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c09ac2bcafe5487187b79c07f4dd9720
	  System UUID:                c09ac2bc-afe5-4871-87b7-9c07f4dd9720
	  Boot ID:                    1fbc2da5-2f05-4c65-92cc-ea55dc184e77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zshqx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-920193-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kindnet-drj9q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m20s
	  kube-system                 kube-apiserver-ha-920193-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-ha-920193-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-pr7zk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-ha-920193-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-vip-ha-920193-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m15s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m20s                  cidrAllocator    Node ha-920193-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m20s (x8 over 4m20s)  kubelet          Node ha-920193-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s (x8 over 4m20s)  kubelet          Node ha-920193-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s (x7 over 4m20s)  kubelet          Node ha-920193-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-920193-m03 event: Registered Node ha-920193-m03 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-920193-m03 event: Registered Node ha-920193-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-920193-m03 event: Registered Node ha-920193-m03 in Controller
	
	
	Name:               ha-920193-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T22_52_56_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:52:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:55:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:52:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:52:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:52:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:53:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    ha-920193-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a2dbc042e3045febd5c0c9d1b2c22ec
	  System UUID:                4a2dbc04-2e30-45fe-bd5c-0c9d1b2c22ec
	  Boot ID:                    1261e6c2-362c-4edd-9457-2b833cda280a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4pzwv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-7d45n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  CIDRAssignmentFailed     3m14s                  cidrAllocator    Node ha-920193-m04 status is now: CIDRAssignmentFailed
	  Normal  Starting                 3m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m14s)  kubelet          Node ha-920193-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m14s)  kubelet          Node ha-920193-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m14s)  kubelet          Node ha-920193-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-920193-m04 event: Registered Node ha-920193-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-920193-m04 event: Registered Node ha-920193-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-920193-m04 event: Registered Node ha-920193-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-920193-m04 status is now: NodeReady
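
Of the four nodes described above, only ha-920193-m02 is unhealthy: its conditions read "Kubelet stopped posting node status" and it carries the node.kubernetes.io/unreachable taints. A minimal sketch for re-checking this state, assuming the same ha-920193 context:

    # One-line summary of all nodes (roles, status, internal IPs, kubelet version).
    kubectl --context ha-920193 get nodes -o wide
    # Full conditions, taints and recent events for the unreachable control-plane node.
    kubectl --context ha-920193 describe node ha-920193-m02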
	
	
	==> dmesg <==
	[Dec 9 22:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049320] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036387] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.848109] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.938823] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.563382] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.738770] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.057878] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055312] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.165760] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.148687] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.252407] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.807769] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.142269] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.067556] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.253709] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.082838] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.454038] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 9 22:50] kauditd_printk_skb: 40 callbacks suppressed
	[ +36.675272] kauditd_printk_skb: 26 callbacks suppressed
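
These kernel messages come from the primary node's VM (ha-920193). A minimal sketch for collecting them, plus the kubelet journal, by hand; the -p and --node flags are assumed to behave as in current minikube releases:

    # Kernel ring buffer on the primary node.
    minikube ssh -p ha-920193 -- sudo dmesg
    # Kubelet journal on the secondary control-plane node.
    minikube ssh -p ha-920193 --node ha-920193-m02 -- sudo journalctl -u kubelet --no-pager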
	
	
	==> etcd [b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9] <==
	{"level":"warn","ts":"2024-12-09T22:56:09.071568Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.115701Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.123425Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.127805Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.141165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.147424Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.149439Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.153984Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.158599Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.162376Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.169780Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.181333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.188187Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.191823Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.195015Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.201746Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.209509Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.219761Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.224267Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.225653Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.229235Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.233086Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.240567Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.246383Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:09.249760Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 22:56:09 up 7 min,  0 users,  load average: 0.40, 0.26, 0.13
	Linux ha-920193 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a] <==
	I1209 22:55:34.244924       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:55:44.241125       1 main.go:297] Handling node with IPs: map[192.168.39.45:{}]
	I1209 22:55:44.241179       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:55:44.241517       1 main.go:297] Handling node with IPs: map[192.168.39.98:{}]
	I1209 22:55:44.241554       1 main.go:324] Node ha-920193-m04 has CIDR [10.244.3.0/24] 
	I1209 22:55:44.242208       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1209 22:55:44.242246       1 main.go:301] handling current node
	I1209 22:55:44.242264       1 main.go:297] Handling node with IPs: map[192.168.39.43:{}]
	I1209 22:55:44.242279       1 main.go:324] Node ha-920193-m02 has CIDR [10.244.1.0/24] 
	I1209 22:55:54.237055       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1209 22:55:54.237098       1 main.go:301] handling current node
	I1209 22:55:54.237112       1 main.go:297] Handling node with IPs: map[192.168.39.43:{}]
	I1209 22:55:54.237117       1 main.go:324] Node ha-920193-m02 has CIDR [10.244.1.0/24] 
	I1209 22:55:54.237320       1 main.go:297] Handling node with IPs: map[192.168.39.45:{}]
	I1209 22:55:54.237342       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:55:54.237447       1 main.go:297] Handling node with IPs: map[192.168.39.98:{}]
	I1209 22:55:54.237463       1 main.go:324] Node ha-920193-m04 has CIDR [10.244.3.0/24] 
	I1209 22:56:04.236382       1 main.go:297] Handling node with IPs: map[192.168.39.45:{}]
	I1209 22:56:04.236482       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:56:04.236733       1 main.go:297] Handling node with IPs: map[192.168.39.98:{}]
	I1209 22:56:04.236768       1 main.go:324] Node ha-920193-m04 has CIDR [10.244.3.0/24] 
	I1209 22:56:04.236884       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1209 22:56:04.236908       1 main.go:301] handling current node
	I1209 22:56:04.236931       1 main.go:297] Handling node with IPs: map[192.168.39.43:{}]
	I1209 22:56:04.236947       1 main.go:324] Node ha-920193-m02 has CIDR [10.244.1.0/24] 
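
The kindnet log simply walks every node and records its pod CIDR (10.244.0.0/24 through 10.244.3.0/24 here). A minimal sketch that prints the same node-to-CIDR mapping straight from the API, assuming the ha-920193 context:

    # Node name and assigned pod CIDR, one per line.
    kubectl --context ha-920193 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'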
	
	
	==> kube-apiserver [2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581] <==
	W1209 22:49:43.150982       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102]
	I1209 22:49:43.152002       1 controller.go:615] quota admission added evaluator for: endpoints
	I1209 22:49:43.156330       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 22:49:43.387632       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1209 22:49:44.564732       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1209 22:49:44.579130       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 22:49:44.588831       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1209 22:49:48.591895       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1209 22:49:48.841334       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1209 22:52:22.354256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36040: use of closed network connection
	E1209 22:52:22.536970       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36064: use of closed network connection
	E1209 22:52:22.712523       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36088: use of closed network connection
	E1209 22:52:22.898417       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36102: use of closed network connection
	E1209 22:52:23.071122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36126: use of closed network connection
	E1209 22:52:23.250546       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36138: use of closed network connection
	E1209 22:52:23.423505       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36152: use of closed network connection
	E1209 22:52:23.596493       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36174: use of closed network connection
	E1209 22:52:23.770267       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36200: use of closed network connection
	E1209 22:52:24.059362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36220: use of closed network connection
	E1209 22:52:24.222108       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36234: use of closed network connection
	E1209 22:52:24.394542       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36254: use of closed network connection
	E1209 22:52:24.570825       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36280: use of closed network connection
	E1209 22:52:24.742045       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36308: use of closed network connection
	E1209 22:52:24.918566       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36330: use of closed network connection
	W1209 22:53:53.164722       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.45]
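
The last line shows the apiserver resetting the endpoints of the kubernetes service to the reachable control-plane IPs, while the repeated "use of closed network connection" errors record reads on already-closed connections between 192.168.39.1 and 192.168.39.254:8443. A minimal sketch for inspecting the current endpoint set, assuming the ha-920193 context:

    # Control-plane addresses currently backing the default kubernetes service.
    kubectl --context ha-920193 -n default get endpoints kubernetes -o yaml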
	
	
	==> kube-controller-manager [f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a] <==
	I1209 22:52:55.696316       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	E1209 22:52:55.827513       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"d21ce5c2-c9ae-46d3-8e56-962d14b633c9\", ResourceVersion:\"913\", Generation:1, CreationTimestamp:time.Date(2024, time.December, 9, 22, 49, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\
",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20241108-5c6d2daf\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00247f6a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\
"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0026282e8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolume
ClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002628300), EmptyDir:(*v1.EmptyDirVolumeSource)
(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.Portworx
VolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002628318), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Az
ureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20241108-5c6d2daf\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00247f6c0)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarS
ource)(0xc00247f700)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:fals
e, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc00298a060), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralCont
ainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002895a00), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002509e80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), O
verhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0027a7a80)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002895a3c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1209 22:52:55.828552       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"6fe45e3d-72f3-4c58-8284-ee89d6d57a36\", ResourceVersion:\"871\", Generation:1, CreationTimestamp:time.Date(2024, time.December, 9, 22, 49, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00197c7a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\"
, Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)
(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00265ecc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002193ae8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolume
Source)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVol
umeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002193b00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtual
DiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.31.2\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00197c7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Reso
urceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"
/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0026ee600), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002860a98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0025a4880), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostA
lias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002693bd0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002860af0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled
on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1209 22:52:56.102815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:57.678400       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.159889       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.160065       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-920193-m04"
	I1209 22:52:58.180925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.828069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.908919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:05.805409       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:16.012967       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-920193-m04"
	I1209 22:53:16.013430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:16.029012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:17.646042       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:25.994489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:54:12.667473       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-920193-m04"
	I1209 22:54:12.668375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	I1209 22:54:12.690072       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	I1209 22:54:12.722935       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.821273ms"
	I1209 22:54:12.724268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.814µs"
	I1209 22:54:13.270393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	I1209 22:54:17.915983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	
	
	==> kube-proxy [233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 22:49:50.258403       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 22:49:50.274620       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	E1209 22:49:50.274749       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 22:49:50.309286       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 22:49:50.309340       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 22:49:50.309367       1 server_linux.go:169] "Using iptables Proxier"
	I1209 22:49:50.311514       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 22:49:50.312044       1 server.go:483] "Version info" version="v1.31.2"
	I1209 22:49:50.312073       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 22:49:50.314372       1 config.go:199] "Starting service config controller"
	I1209 22:49:50.314401       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 22:49:50.314584       1 config.go:105] "Starting endpoint slice config controller"
	I1209 22:49:50.314607       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 22:49:50.315221       1 config.go:328] "Starting node config controller"
	I1209 22:49:50.315250       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 22:49:50.415190       1 shared_informer.go:320] Caches are synced for service config
	I1209 22:49:50.415151       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 22:49:50.415308       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963] <==
	W1209 22:49:42.622383       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 22:49:42.622920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 22:49:42.673980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 22:49:42.674373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:49:42.700294       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 22:49:42.700789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 22:49:44.393323       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1209 22:52:18.167059       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkqdv\": pod busybox-7dff88458-rkqdv is already assigned to node \"ha-920193-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rkqdv" node="ha-920193-m02"
	E1209 22:52:18.167170       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c1517f25-fc19-4255-b4c6-9a02511b80c3(default/busybox-7dff88458-rkqdv) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rkqdv"
	E1209 22:52:18.167196       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkqdv\": pod busybox-7dff88458-rkqdv is already assigned to node \"ha-920193-m02\"" pod="default/busybox-7dff88458-rkqdv"
	I1209 22:52:18.167215       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rkqdv" node="ha-920193-m02"
	E1209 22:52:55.621239       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x5mqb\": pod kindnet-x5mqb is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-x5mqb" node="ha-920193-m04"
	E1209 22:52:55.621341       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x5mqb\": pod kindnet-x5mqb is already assigned to node \"ha-920193-m04\"" pod="kube-system/kindnet-x5mqb"
	E1209 22:52:55.648021       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-k5v9w\": pod kube-proxy-k5v9w is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-k5v9w" node="ha-920193-m04"
	E1209 22:52:55.648095       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5882629a-a929-45e4-b026-e75a2c17d56d(kube-system/kube-proxy-k5v9w) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-k5v9w"
	E1209 22:52:55.648113       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-k5v9w\": pod kube-proxy-k5v9w is already assigned to node \"ha-920193-m04\"" pod="kube-system/kube-proxy-k5v9w"
	I1209 22:52:55.648138       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-k5v9w" node="ha-920193-m04"
	E1209 22:52:55.758943       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mp7q7\": pod kube-proxy-mp7q7 is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mp7q7" node="ha-920193-m04"
	E1209 22:52:55.759080       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a4d32bae-6ec6-4338-8689-3b32518b021b(kube-system/kube-proxy-mp7q7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-mp7q7"
	E1209 22:52:55.759142       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mp7q7\": pod kube-proxy-mp7q7 is already assigned to node \"ha-920193-m04\"" pod="kube-system/kube-proxy-mp7q7"
	I1209 22:52:55.759188       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-mp7q7" node="ha-920193-m04"
	E1209 22:52:55.775999       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7d45n\": pod kube-proxy-7d45n is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7d45n" node="ha-920193-m04"
	E1209 22:52:55.776095       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7d45n\": pod kube-proxy-7d45n is already assigned to node \"ha-920193-m04\"" pod="kube-system/kube-proxy-7d45n"
	E1209 22:52:55.784854       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4pzwv\": pod kindnet-4pzwv is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4pzwv" node="ha-920193-m04"
	E1209 22:52:55.785146       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4pzwv\": pod kindnet-4pzwv is already assigned to node \"ha-920193-m04\"" pod="kube-system/kindnet-4pzwv"
	
	
	==> kubelet <==
	Dec 09 22:54:44 ha-920193 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 22:54:44 ha-920193 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 22:54:44 ha-920193 kubelet[1302]: E1209 22:54:44.581397    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784884581065881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:54:44 ha-920193 kubelet[1302]: E1209 22:54:44.581439    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784884581065881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:54:54 ha-920193 kubelet[1302]: E1209 22:54:54.583096    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784894582573620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:54:54 ha-920193 kubelet[1302]: E1209 22:54:54.583476    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784894582573620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:04 ha-920193 kubelet[1302]: E1209 22:55:04.587043    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784904586404563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:04 ha-920193 kubelet[1302]: E1209 22:55:04.587520    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784904586404563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:14 ha-920193 kubelet[1302]: E1209 22:55:14.590203    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784914589554601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:14 ha-920193 kubelet[1302]: E1209 22:55:14.590522    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784914589554601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:24 ha-920193 kubelet[1302]: E1209 22:55:24.593898    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784924593226467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:24 ha-920193 kubelet[1302]: E1209 22:55:24.593942    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784924593226467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:34 ha-920193 kubelet[1302]: E1209 22:55:34.596079    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784934595337713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:34 ha-920193 kubelet[1302]: E1209 22:55:34.596564    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784934595337713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:44 ha-920193 kubelet[1302]: E1209 22:55:44.520346    1302 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 22:55:44 ha-920193 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 22:55:44 ha-920193 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 22:55:44 ha-920193 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 22:55:44 ha-920193 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 22:55:44 ha-920193 kubelet[1302]: E1209 22:55:44.598917    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784944598396332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:44 ha-920193 kubelet[1302]: E1209 22:55:44.598999    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784944598396332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:54 ha-920193 kubelet[1302]: E1209 22:55:54.601949    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784954601550343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:54 ha-920193 kubelet[1302]: E1209 22:55:54.602225    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784954601550343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:56:04 ha-920193 kubelet[1302]: E1209 22:56:04.604279    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784964603929270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:56:04 ha-920193 kubelet[1302]: E1209 22:56:04.604303    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784964603929270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-920193 -n ha-920193
helpers_test.go:261: (dbg) Run:  kubectl --context ha-920193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.44s)
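The repeated controller-manager errors in the post-mortem above ("Operation cannot be fulfilled on daemonsets.apps ...: the object has been modified; please apply your changes to the latest version and try again") and the scheduler's "pod ... is already assigned" entries are ordinary optimistic-concurrency conflicts on stale writes, which Kubernetes components absorb by re-reading the object and retrying. A minimal, hypothetical sketch of that retry pattern with client-go follows; the package, helper name, and the mutated field are illustrative only and are not taken from the minikube or Kubernetes sources.

// Package retrysketch is illustrative only: it shows how a write that can hit
// a Conflict ("the object has been modified") is normally wrapped in
// retry.RetryOnConflict so each attempt re-reads the latest ResourceVersion.
package retrysketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// bumpRevisionHistory is a hypothetical helper: it mutates one DaemonSet field
// and retries the update whenever the API server reports a conflict.
func bumpRevisionHistory(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so the write carries a current ResourceVersion.
		ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		limit := int32(10) // arbitrary example mutation
		ds.Spec.RevisionHistoryLimit = &limit
		_, err = cs.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
		return err // a Conflict here makes RetryOnConflict try again
	})
}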

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.165797924s)
ha_test.go:309: expected profile "ha-920193" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-920193\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-920193\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-920193\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.102\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.43\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.45\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.98\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":
false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"M
ountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-920193 -n ha-920193
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-920193 logs -n 25: (1.252405124s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193:/home/docker/cp-test_ha-920193-m03_ha-920193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193 sudo cat                                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m02:/home/docker/cp-test_ha-920193-m03_ha-920193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m02 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04:/home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m04 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp testdata/cp-test.txt                                                | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3609340860/001/cp-test_ha-920193-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193:/home/docker/cp-test_ha-920193-m04_ha-920193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193 sudo cat                                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m02:/home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m02 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03:/home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m03 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-920193 node stop m02 -v=7                                                     | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-920193 node start m02 -v=7                                                    | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 22:49:03
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 22:49:03.145250   36778 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:49:03.145390   36778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:49:03.145399   36778 out.go:358] Setting ErrFile to fd 2...
	I1209 22:49:03.145404   36778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:49:03.145610   36778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 22:49:03.146205   36778 out.go:352] Setting JSON to false
	I1209 22:49:03.147113   36778 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5494,"bootTime":1733779049,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 22:49:03.147209   36778 start.go:139] virtualization: kvm guest
	I1209 22:49:03.149227   36778 out.go:177] * [ha-920193] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 22:49:03.150446   36778 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 22:49:03.150468   36778 notify.go:220] Checking for updates...
	I1209 22:49:03.152730   36778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:49:03.153842   36778 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:49:03.154957   36778 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:03.156087   36778 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 22:49:03.157179   36778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 22:49:03.158417   36778 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:49:03.193867   36778 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 22:49:03.195030   36778 start.go:297] selected driver: kvm2
	I1209 22:49:03.195046   36778 start.go:901] validating driver "kvm2" against <nil>
	I1209 22:49:03.195060   36778 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 22:49:03.196334   36778 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:49:03.196484   36778 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 22:49:03.213595   36778 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 22:49:03.213648   36778 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 22:49:03.213994   36778 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:49:03.214030   36778 cni.go:84] Creating CNI manager for ""
	I1209 22:49:03.214072   36778 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1209 22:49:03.214085   36778 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 22:49:03.214141   36778 start.go:340] cluster config:
	{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1209 22:49:03.214261   36778 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:49:03.215829   36778 out.go:177] * Starting "ha-920193" primary control-plane node in "ha-920193" cluster
	I1209 22:49:03.216947   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:49:03.216988   36778 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 22:49:03.217002   36778 cache.go:56] Caching tarball of preloaded images
	I1209 22:49:03.217077   36778 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:49:03.217091   36778 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:49:03.217507   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:03.217534   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json: {Name:mk69f8481a2f9361b3b46196caa6653a8d77a9fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:03.217729   36778 start.go:360] acquireMachinesLock for ha-920193: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:49:03.217779   36778 start.go:364] duration metric: took 30.111µs to acquireMachinesLock for "ha-920193"
	I1209 22:49:03.217805   36778 start.go:93] Provisioning new machine with config: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:49:03.217887   36778 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 22:49:03.219504   36778 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 22:49:03.219675   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:03.219709   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:03.234776   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
	I1209 22:49:03.235235   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:03.235843   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:03.235867   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:03.236261   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:03.236466   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:03.236632   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:03.236794   36778 start.go:159] libmachine.API.Create for "ha-920193" (driver="kvm2")
	I1209 22:49:03.236821   36778 client.go:168] LocalClient.Create starting
	I1209 22:49:03.236862   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:49:03.236900   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:03.236922   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:03.237001   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:49:03.237033   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:03.237054   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:03.237078   36778 main.go:141] libmachine: Running pre-create checks...
	I1209 22:49:03.237090   36778 main.go:141] libmachine: (ha-920193) Calling .PreCreateCheck
	I1209 22:49:03.237426   36778 main.go:141] libmachine: (ha-920193) Calling .GetConfigRaw
	I1209 22:49:03.237793   36778 main.go:141] libmachine: Creating machine...
	I1209 22:49:03.237806   36778 main.go:141] libmachine: (ha-920193) Calling .Create
	I1209 22:49:03.237934   36778 main.go:141] libmachine: (ha-920193) Creating KVM machine...
	I1209 22:49:03.239483   36778 main.go:141] libmachine: (ha-920193) DBG | found existing default KVM network
	I1209 22:49:03.240340   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.240142   36801 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1209 22:49:03.240365   36778 main.go:141] libmachine: (ha-920193) DBG | created network xml: 
	I1209 22:49:03.240393   36778 main.go:141] libmachine: (ha-920193) DBG | <network>
	I1209 22:49:03.240407   36778 main.go:141] libmachine: (ha-920193) DBG |   <name>mk-ha-920193</name>
	I1209 22:49:03.240417   36778 main.go:141] libmachine: (ha-920193) DBG |   <dns enable='no'/>
	I1209 22:49:03.240427   36778 main.go:141] libmachine: (ha-920193) DBG |   
	I1209 22:49:03.240438   36778 main.go:141] libmachine: (ha-920193) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 22:49:03.240454   36778 main.go:141] libmachine: (ha-920193) DBG |     <dhcp>
	I1209 22:49:03.240491   36778 main.go:141] libmachine: (ha-920193) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 22:49:03.240508   36778 main.go:141] libmachine: (ha-920193) DBG |     </dhcp>
	I1209 22:49:03.240522   36778 main.go:141] libmachine: (ha-920193) DBG |   </ip>
	I1209 22:49:03.240532   36778 main.go:141] libmachine: (ha-920193) DBG |   
	I1209 22:49:03.240542   36778 main.go:141] libmachine: (ha-920193) DBG | </network>
	I1209 22:49:03.240557   36778 main.go:141] libmachine: (ha-920193) DBG | 
	I1209 22:49:03.245903   36778 main.go:141] libmachine: (ha-920193) DBG | trying to create private KVM network mk-ha-920193 192.168.39.0/24...
	I1209 22:49:03.312870   36778 main.go:141] libmachine: (ha-920193) DBG | private KVM network mk-ha-920193 192.168.39.0/24 created
	I1209 22:49:03.312901   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.312803   36801 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:03.312925   36778 main.go:141] libmachine: (ha-920193) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193 ...
	I1209 22:49:03.312938   36778 main.go:141] libmachine: (ha-920193) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:49:03.312960   36778 main.go:141] libmachine: (ha-920193) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:49:03.559720   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.559511   36801 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa...
	I1209 22:49:03.632777   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.632628   36801 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/ha-920193.rawdisk...
	I1209 22:49:03.632808   36778 main.go:141] libmachine: (ha-920193) DBG | Writing magic tar header
	I1209 22:49:03.632868   36778 main.go:141] libmachine: (ha-920193) DBG | Writing SSH key tar header
	I1209 22:49:03.632897   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:03.632735   36801 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193 ...
	I1209 22:49:03.632914   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193 (perms=drwx------)
	I1209 22:49:03.632931   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:49:03.632938   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:49:03.632951   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:49:03.632959   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:49:03.632968   36778 main.go:141] libmachine: (ha-920193) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:49:03.632988   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193
	I1209 22:49:03.632996   36778 main.go:141] libmachine: (ha-920193) Creating domain...
	I1209 22:49:03.633013   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:49:03.633026   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:03.633034   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:49:03.633039   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:49:03.633046   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:49:03.633051   36778 main.go:141] libmachine: (ha-920193) DBG | Checking permissions on dir: /home
	I1209 22:49:03.633058   36778 main.go:141] libmachine: (ha-920193) DBG | Skipping /home - not owner
	I1209 22:49:03.634033   36778 main.go:141] libmachine: (ha-920193) define libvirt domain using xml: 
	I1209 22:49:03.634053   36778 main.go:141] libmachine: (ha-920193) <domain type='kvm'>
	I1209 22:49:03.634063   36778 main.go:141] libmachine: (ha-920193)   <name>ha-920193</name>
	I1209 22:49:03.634077   36778 main.go:141] libmachine: (ha-920193)   <memory unit='MiB'>2200</memory>
	I1209 22:49:03.634087   36778 main.go:141] libmachine: (ha-920193)   <vcpu>2</vcpu>
	I1209 22:49:03.634099   36778 main.go:141] libmachine: (ha-920193)   <features>
	I1209 22:49:03.634108   36778 main.go:141] libmachine: (ha-920193)     <acpi/>
	I1209 22:49:03.634117   36778 main.go:141] libmachine: (ha-920193)     <apic/>
	I1209 22:49:03.634126   36778 main.go:141] libmachine: (ha-920193)     <pae/>
	I1209 22:49:03.634143   36778 main.go:141] libmachine: (ha-920193)     
	I1209 22:49:03.634155   36778 main.go:141] libmachine: (ha-920193)   </features>
	I1209 22:49:03.634163   36778 main.go:141] libmachine: (ha-920193)   <cpu mode='host-passthrough'>
	I1209 22:49:03.634172   36778 main.go:141] libmachine: (ha-920193)   
	I1209 22:49:03.634184   36778 main.go:141] libmachine: (ha-920193)   </cpu>
	I1209 22:49:03.634192   36778 main.go:141] libmachine: (ha-920193)   <os>
	I1209 22:49:03.634200   36778 main.go:141] libmachine: (ha-920193)     <type>hvm</type>
	I1209 22:49:03.634209   36778 main.go:141] libmachine: (ha-920193)     <boot dev='cdrom'/>
	I1209 22:49:03.634217   36778 main.go:141] libmachine: (ha-920193)     <boot dev='hd'/>
	I1209 22:49:03.634226   36778 main.go:141] libmachine: (ha-920193)     <bootmenu enable='no'/>
	I1209 22:49:03.634233   36778 main.go:141] libmachine: (ha-920193)   </os>
	I1209 22:49:03.634241   36778 main.go:141] libmachine: (ha-920193)   <devices>
	I1209 22:49:03.634250   36778 main.go:141] libmachine: (ha-920193)     <disk type='file' device='cdrom'>
	I1209 22:49:03.634279   36778 main.go:141] libmachine: (ha-920193)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/boot2docker.iso'/>
	I1209 22:49:03.634301   36778 main.go:141] libmachine: (ha-920193)       <target dev='hdc' bus='scsi'/>
	I1209 22:49:03.634316   36778 main.go:141] libmachine: (ha-920193)       <readonly/>
	I1209 22:49:03.634323   36778 main.go:141] libmachine: (ha-920193)     </disk>
	I1209 22:49:03.634332   36778 main.go:141] libmachine: (ha-920193)     <disk type='file' device='disk'>
	I1209 22:49:03.634344   36778 main.go:141] libmachine: (ha-920193)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:49:03.634359   36778 main.go:141] libmachine: (ha-920193)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/ha-920193.rawdisk'/>
	I1209 22:49:03.634367   36778 main.go:141] libmachine: (ha-920193)       <target dev='hda' bus='virtio'/>
	I1209 22:49:03.634375   36778 main.go:141] libmachine: (ha-920193)     </disk>
	I1209 22:49:03.634383   36778 main.go:141] libmachine: (ha-920193)     <interface type='network'>
	I1209 22:49:03.634391   36778 main.go:141] libmachine: (ha-920193)       <source network='mk-ha-920193'/>
	I1209 22:49:03.634409   36778 main.go:141] libmachine: (ha-920193)       <model type='virtio'/>
	I1209 22:49:03.634421   36778 main.go:141] libmachine: (ha-920193)     </interface>
	I1209 22:49:03.634431   36778 main.go:141] libmachine: (ha-920193)     <interface type='network'>
	I1209 22:49:03.634442   36778 main.go:141] libmachine: (ha-920193)       <source network='default'/>
	I1209 22:49:03.634452   36778 main.go:141] libmachine: (ha-920193)       <model type='virtio'/>
	I1209 22:49:03.634463   36778 main.go:141] libmachine: (ha-920193)     </interface>
	I1209 22:49:03.634473   36778 main.go:141] libmachine: (ha-920193)     <serial type='pty'>
	I1209 22:49:03.634484   36778 main.go:141] libmachine: (ha-920193)       <target port='0'/>
	I1209 22:49:03.634498   36778 main.go:141] libmachine: (ha-920193)     </serial>
	I1209 22:49:03.634535   36778 main.go:141] libmachine: (ha-920193)     <console type='pty'>
	I1209 22:49:03.634561   36778 main.go:141] libmachine: (ha-920193)       <target type='serial' port='0'/>
	I1209 22:49:03.634581   36778 main.go:141] libmachine: (ha-920193)     </console>
	I1209 22:49:03.634592   36778 main.go:141] libmachine: (ha-920193)     <rng model='virtio'>
	I1209 22:49:03.634601   36778 main.go:141] libmachine: (ha-920193)       <backend model='random'>/dev/random</backend>
	I1209 22:49:03.634611   36778 main.go:141] libmachine: (ha-920193)     </rng>
	I1209 22:49:03.634621   36778 main.go:141] libmachine: (ha-920193)     
	I1209 22:49:03.634629   36778 main.go:141] libmachine: (ha-920193)     
	I1209 22:49:03.634634   36778 main.go:141] libmachine: (ha-920193)   </devices>
	I1209 22:49:03.634641   36778 main.go:141] libmachine: (ha-920193) </domain>
	I1209 22:49:03.634660   36778 main.go:141] libmachine: (ha-920193) 
	I1209 22:49:03.638977   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:88:5b:26 in network default
	I1209 22:49:03.639478   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:03.639517   36778 main.go:141] libmachine: (ha-920193) Ensuring networks are active...
	I1209 22:49:03.640151   36778 main.go:141] libmachine: (ha-920193) Ensuring network default is active
	I1209 22:49:03.640468   36778 main.go:141] libmachine: (ha-920193) Ensuring network mk-ha-920193 is active
	I1209 22:49:03.640970   36778 main.go:141] libmachine: (ha-920193) Getting domain xml...
	I1209 22:49:03.641682   36778 main.go:141] libmachine: (ha-920193) Creating domain...
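For manual reproduction of the network and domain definitions logged above, the virsh CLI offers a roughly equivalent path; minikube itself drives libvirt through its Go bindings rather than virsh, so this is only a sketch, and the file names net.xml and domain.xml are placeholders for the XML dumped in the log:

    # Sketch: define and start the same libvirt objects by hand.
    # Save the <network> XML from the log as net.xml and the <domain> XML as domain.xml first.
    virsh --connect qemu:///system net-define net.xml          # creates mk-ha-920193
    virsh --connect qemu:///system net-start mk-ha-920193
    virsh --connect qemu:///system net-autostart mk-ha-920193
    virsh --connect qemu:///system define domain.xml           # defines the ha-920193 domain
    virsh --connect qemu:///system start ha-920193
    virsh --connect qemu:///system domifaddr ha-920193         # poll until a DHCP lease appears (the "Waiting to get IP" loop below)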
	I1209 22:49:04.829698   36778 main.go:141] libmachine: (ha-920193) Waiting to get IP...
	I1209 22:49:04.830434   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:04.830835   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:04.830867   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:04.830824   36801 retry.go:31] will retry after 207.081791ms: waiting for machine to come up
	I1209 22:49:05.039144   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:05.039519   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:05.039585   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:05.039471   36801 retry.go:31] will retry after 281.967291ms: waiting for machine to come up
	I1209 22:49:05.322964   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:05.323366   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:05.323382   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:05.323322   36801 retry.go:31] will retry after 481.505756ms: waiting for machine to come up
	I1209 22:49:05.805961   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:05.806356   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:05.806376   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:05.806314   36801 retry.go:31] will retry after 549.592497ms: waiting for machine to come up
	I1209 22:49:06.357773   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:06.358284   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:06.358319   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:06.358243   36801 retry.go:31] will retry after 535.906392ms: waiting for machine to come up
	I1209 22:49:06.896232   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:06.896608   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:06.896631   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:06.896560   36801 retry.go:31] will retry after 874.489459ms: waiting for machine to come up
	I1209 22:49:07.772350   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:07.772754   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:07.772787   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:07.772706   36801 retry.go:31] will retry after 1.162571844s: waiting for machine to come up
	I1209 22:49:08.936520   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:08.936889   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:08.936917   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:08.936873   36801 retry.go:31] will retry after 1.45755084s: waiting for machine to come up
	I1209 22:49:10.396453   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:10.396871   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:10.396892   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:10.396843   36801 retry.go:31] will retry after 1.609479332s: waiting for machine to come up
	I1209 22:49:12.008693   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:12.009140   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:12.009166   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:12.009087   36801 retry.go:31] will retry after 2.268363531s: waiting for machine to come up
	I1209 22:49:14.279389   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:14.279856   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:14.279912   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:14.279851   36801 retry.go:31] will retry after 2.675009942s: waiting for machine to come up
	I1209 22:49:16.957696   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:16.958066   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:16.958096   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:16.958013   36801 retry.go:31] will retry after 2.665510056s: waiting for machine to come up
	I1209 22:49:19.624784   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:19.625187   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:19.625202   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:19.625166   36801 retry.go:31] will retry after 2.857667417s: waiting for machine to come up
	I1209 22:49:22.486137   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:22.486540   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find current IP address of domain ha-920193 in network mk-ha-920193
	I1209 22:49:22.486563   36778 main.go:141] libmachine: (ha-920193) DBG | I1209 22:49:22.486493   36801 retry.go:31] will retry after 4.026256687s: waiting for machine to come up
	I1209 22:49:26.516409   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.516832   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has current primary IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.516858   36778 main.go:141] libmachine: (ha-920193) Found IP for machine: 192.168.39.102
	I1209 22:49:26.516892   36778 main.go:141] libmachine: (ha-920193) Reserving static IP address...
	I1209 22:49:26.517220   36778 main.go:141] libmachine: (ha-920193) DBG | unable to find host DHCP lease matching {name: "ha-920193", mac: "52:54:00:eb:3c:cb", ip: "192.168.39.102"} in network mk-ha-920193
	I1209 22:49:26.587512   36778 main.go:141] libmachine: (ha-920193) DBG | Getting to WaitForSSH function...
	I1209 22:49:26.587538   36778 main.go:141] libmachine: (ha-920193) Reserved static IP address: 192.168.39.102
	I1209 22:49:26.587551   36778 main.go:141] libmachine: (ha-920193) Waiting for SSH to be available...
	I1209 22:49:26.589724   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.590056   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.590080   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.590252   36778 main.go:141] libmachine: (ha-920193) DBG | Using SSH client type: external
	I1209 22:49:26.590281   36778 main.go:141] libmachine: (ha-920193) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa (-rw-------)
	I1209 22:49:26.590312   36778 main.go:141] libmachine: (ha-920193) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:49:26.590335   36778 main.go:141] libmachine: (ha-920193) DBG | About to run SSH command:
	I1209 22:49:26.590368   36778 main.go:141] libmachine: (ha-920193) DBG | exit 0
	I1209 22:49:26.707404   36778 main.go:141] libmachine: (ha-920193) DBG | SSH cmd err, output: <nil>: 
	I1209 22:49:26.707687   36778 main.go:141] libmachine: (ha-920193) KVM machine creation complete!
	I1209 22:49:26.708024   36778 main.go:141] libmachine: (ha-920193) Calling .GetConfigRaw
	I1209 22:49:26.708523   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:26.708739   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:26.708918   36778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:49:26.708931   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:26.710113   36778 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:49:26.710125   36778 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:49:26.710130   36778 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:49:26.710135   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:26.712426   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.712765   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.712791   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.712925   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:26.713081   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.713185   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.713306   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:26.713452   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:26.713680   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:26.713692   36778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:49:26.806695   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:49:26.806717   36778 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:49:26.806725   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:26.809366   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.809767   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.809800   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.809958   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:26.810141   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.810311   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.810444   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:26.810627   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:26.810776   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:26.810787   36778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:49:26.908040   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:49:26.908090   36778 main.go:141] libmachine: found compatible host: buildroot
	I1209 22:49:26.908097   36778 main.go:141] libmachine: Provisioning with buildroot...
	I1209 22:49:26.908104   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:26.908364   36778 buildroot.go:166] provisioning hostname "ha-920193"
	I1209 22:49:26.908392   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:26.908590   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:26.911118   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.911513   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:26.911538   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:26.911715   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:26.911868   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.911989   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:26.912100   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:26.912224   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:26.912420   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:26.912438   36778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193 && echo "ha-920193" | sudo tee /etc/hostname
	I1209 22:49:27.020773   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193
	
	I1209 22:49:27.020799   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.023575   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.023846   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.023871   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.024029   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.024220   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.024374   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.024530   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.024691   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:27.024872   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:27.024888   36778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:49:27.127613   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:49:27.127642   36778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:49:27.127660   36778 buildroot.go:174] setting up certificates
	I1209 22:49:27.127691   36778 provision.go:84] configureAuth start
	I1209 22:49:27.127710   36778 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:49:27.127961   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:27.130248   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.130591   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.130619   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.130738   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.132923   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.133247   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.133271   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.133422   36778 provision.go:143] copyHostCerts
	I1209 22:49:27.133461   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:49:27.133491   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:49:27.133506   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:49:27.133573   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:49:27.133653   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:49:27.133670   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:49:27.133677   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:49:27.133702   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:49:27.133745   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:49:27.133761   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:49:27.133767   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:49:27.133788   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:49:27.133835   36778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193 san=[127.0.0.1 192.168.39.102 ha-920193 localhost minikube]
	I1209 22:49:27.297434   36778 provision.go:177] copyRemoteCerts
	I1209 22:49:27.297494   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:49:27.297515   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.300069   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.300424   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.300443   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.300615   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.300792   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.300928   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.301029   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.378773   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:49:27.378830   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1209 22:49:27.403553   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:49:27.403627   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 22:49:27.425459   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:49:27.425526   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:49:27.449197   36778 provision.go:87] duration metric: took 321.487984ms to configureAuth
	I1209 22:49:27.449229   36778 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:49:27.449449   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:49:27.449534   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.453191   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.453559   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.453595   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.453759   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.453939   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.454070   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.454184   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.454317   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:27.454513   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:27.454534   36778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:49:27.653703   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:49:27.653733   36778 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:49:27.653756   36778 main.go:141] libmachine: (ha-920193) Calling .GetURL
	I1209 22:49:27.655032   36778 main.go:141] libmachine: (ha-920193) DBG | Using libvirt version 6000000
	I1209 22:49:27.657160   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.657463   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.657491   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.657682   36778 main.go:141] libmachine: Docker is up and running!
	I1209 22:49:27.657699   36778 main.go:141] libmachine: Reticulating splines...
	I1209 22:49:27.657708   36778 client.go:171] duration metric: took 24.420875377s to LocalClient.Create
	I1209 22:49:27.657735   36778 start.go:167] duration metric: took 24.420942176s to libmachine.API.Create "ha-920193"
	I1209 22:49:27.657747   36778 start.go:293] postStartSetup for "ha-920193" (driver="kvm2")
	I1209 22:49:27.657761   36778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:49:27.657785   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.657983   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:49:27.658006   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.659917   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.660172   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.660200   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.660370   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.660519   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.660646   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.660782   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.737935   36778 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:49:27.741969   36778 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:49:27.741998   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:49:27.742081   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:49:27.742178   36778 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:49:27.742190   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:49:27.742316   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:49:27.752769   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:49:27.776187   36778 start.go:296] duration metric: took 118.424893ms for postStartSetup
	I1209 22:49:27.776233   36778 main.go:141] libmachine: (ha-920193) Calling .GetConfigRaw
	I1209 22:49:27.776813   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:27.779433   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.779777   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.779809   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.780018   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:27.780196   36778 start.go:128] duration metric: took 24.562298059s to createHost
	I1209 22:49:27.780219   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.782389   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.782713   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.782737   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.782928   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.783093   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.783255   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.783378   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.783531   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:49:27.783762   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:49:27.783780   36778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:49:27.880035   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733784567.857266275
	
	I1209 22:49:27.880058   36778 fix.go:216] guest clock: 1733784567.857266275
	I1209 22:49:27.880065   36778 fix.go:229] Guest: 2024-12-09 22:49:27.857266275 +0000 UTC Remote: 2024-12-09 22:49:27.780207864 +0000 UTC m=+24.672894470 (delta=77.058411ms)
	I1209 22:49:27.880084   36778 fix.go:200] guest clock delta is within tolerance: 77.058411ms
	I1209 22:49:27.880088   36778 start.go:83] releasing machines lock for "ha-920193", held for 24.662297943s
	I1209 22:49:27.880110   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.880381   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:27.883090   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.883418   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.883452   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.883630   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.884081   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.884211   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:27.884272   36778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:49:27.884329   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.884381   36778 ssh_runner.go:195] Run: cat /version.json
	I1209 22:49:27.884403   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:27.886622   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.886872   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.886899   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.886994   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.887039   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.887207   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.887321   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:27.887333   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.887353   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:27.887479   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.887529   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:27.887692   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:27.887829   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:27.887976   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:27.963462   36778 ssh_runner.go:195] Run: systemctl --version
	I1209 22:49:27.986028   36778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:49:28.143161   36778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:49:28.149221   36778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:49:28.149289   36778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:49:28.165410   36778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 22:49:28.165442   36778 start.go:495] detecting cgroup driver to use...
	I1209 22:49:28.165509   36778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:49:28.181384   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:49:28.195011   36778 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:49:28.195063   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:49:28.208554   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:49:28.222230   36778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:49:28.338093   36778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:49:28.483809   36778 docker.go:233] disabling docker service ...
	I1209 22:49:28.483868   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:49:28.497723   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:49:28.510133   36778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:49:28.637703   36778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:49:28.768621   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:49:28.781961   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:49:28.799140   36778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:49:28.799205   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.808634   36778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:49:28.808697   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.818355   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.827780   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.837191   36778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:49:28.846758   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.856291   36778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.872403   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:49:28.881716   36778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:49:28.890298   36778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:49:28.890355   36778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:49:28.902738   36778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 22:49:28.911729   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:49:29.013922   36778 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 22:49:29.106638   36778 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:49:29.106719   36778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:49:29.111193   36778 start.go:563] Will wait 60s for crictl version
	I1209 22:49:29.111261   36778 ssh_runner.go:195] Run: which crictl
	I1209 22:49:29.115298   36778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:49:29.151109   36778 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:49:29.151178   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:49:29.178245   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:49:29.206246   36778 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:49:29.207478   36778 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:49:29.209787   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:29.210134   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:29.210160   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:29.210332   36778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:49:29.214243   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
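The one-liner above is an idempotent /etc/hosts update: drop any existing host.minikube.internal line, append a fresh tab-separated mapping to 192.168.39.1, and copy the temp file back with sudo. The same pattern written out as a standalone sketch (the variable names are illustrative, not from the log):

    # idempotently (re)write one host entry; run on the node as a regular user
    ip=192.168.39.1
    name=host.minikube.internal
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts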
	I1209 22:49:29.226620   36778 kubeadm.go:883] updating cluster {Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 22:49:29.226723   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:49:29.226766   36778 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:49:29.257928   36778 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 22:49:29.257999   36778 ssh_runner.go:195] Run: which lz4
	I1209 22:49:29.261848   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1209 22:49:29.261955   36778 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 22:49:29.265782   36778 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 22:49:29.265814   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 22:49:30.441006   36778 crio.go:462] duration metric: took 1.179084887s to copy over tarball
	I1209 22:49:30.441074   36778 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 22:49:32.468580   36778 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.027482243s)
	I1209 22:49:32.468624   36778 crio.go:469] duration metric: took 2.027585779s to extract the tarball
	I1209 22:49:32.468641   36778 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 22:49:32.505123   36778 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:49:32.547324   36778 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 22:49:32.547346   36778 cache_images.go:84] Images are preloaded, skipping loading
	I1209 22:49:32.547353   36778 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.2 crio true true} ...
	I1209 22:49:32.547438   36778 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
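The [Service] override above is what lands in the kubelet drop-in written a few lines later (the 309-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A sketch for confirming the flags the node's kubelet is actually started with:

    # print the kubelet systemd drop-in generated from the unit text above
    minikube ssh -p ha-920193 "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"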
	I1209 22:49:32.547498   36778 ssh_runner.go:195] Run: crio config
	I1209 22:49:32.589945   36778 cni.go:84] Creating CNI manager for ""
	I1209 22:49:32.589970   36778 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 22:49:32.589982   36778 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 22:49:32.590011   36778 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-920193 NodeName:ha-920193 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 22:49:32.590137   36778 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-920193"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.102"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
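The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new further down and promoted to kubeadm.yaml just before init. If it ever needs a manual sanity check, a dry run against the staged file is a non-destructive option (a sketch, run on the node, using the same versioned kubeadm binary the init step below puts on PATH):

    # validate the generated config without mutating the node
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run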
	
	I1209 22:49:32.590159   36778 kube-vip.go:115] generating kube-vip config ...
	I1209 22:49:32.590202   36778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:49:32.605724   36778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:49:32.605829   36778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
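kube-vip runs as a static pod on each control-plane node, advertises the HA virtual IP 192.168.39.254 on eth0 via ARP, and (with lb_enable/lb_port set above) load-balances API traffic on port 8443 across control planes. A sketch for checking whether this node currently holds the VIP:

    # the address appears on eth0 of whichever control plane holds the kube-vip lease
    minikube ssh -p ha-920193 "ip -4 addr show dev eth0 | grep 192.168.39.254"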
	I1209 22:49:32.605883   36778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:49:32.615285   36778 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 22:49:32.615345   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1209 22:49:32.624299   36778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1209 22:49:32.639876   36778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:49:32.656137   36778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1209 22:49:32.672494   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1209 22:49:32.688039   36778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:49:32.691843   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:49:32.703440   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:49:32.825661   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:49:32.842362   36778 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.102
	I1209 22:49:32.842387   36778 certs.go:194] generating shared ca certs ...
	I1209 22:49:32.842404   36778 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:32.842561   36778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:49:32.842601   36778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:49:32.842611   36778 certs.go:256] generating profile certs ...
	I1209 22:49:32.842674   36778 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:49:32.842693   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt with IP's: []
	I1209 22:49:32.980963   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt ...
	I1209 22:49:32.980992   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt: {Name:mkd9ec798303363f6538acfc05f1a5f04066e731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:32.981176   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key ...
	I1209 22:49:32.981188   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key: {Name:mk056f923a34783de09213845e376bddce6f3df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:32.981268   36778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19
	I1209 22:49:32.981285   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I1209 22:49:33.242216   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19 ...
	I1209 22:49:33.242250   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19: {Name:mk7179026523f0b057d26b52e40a5885ad95d8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.242434   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19 ...
	I1209 22:49:33.242448   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19: {Name:mk65609d82220269362f492c0a2d0cc4da8d1447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.242525   36778 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.ec574f19 -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:49:33.242596   36778 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.ec574f19 -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
	I1209 22:49:33.242650   36778 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:49:33.242665   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt with IP's: []
	I1209 22:49:33.389277   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt ...
	I1209 22:49:33.389307   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt: {Name:mk8b70654b36de7093b054b1d0d39a95b39d45fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.389473   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key ...
	I1209 22:49:33.389485   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key: {Name:mk4ec3e3be54da03f1d1683c75f10f14c0904ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:33.389559   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:49:33.389576   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:49:33.389587   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:49:33.389600   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:49:33.389610   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:49:33.389620   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:49:33.389632   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:49:33.389642   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:49:33.389693   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:49:33.389729   36778 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:49:33.389739   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:49:33.389758   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:49:33.389781   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:49:33.389801   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:49:33.389837   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:49:33.389863   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.389878   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.389890   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.390445   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:49:33.414470   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:49:33.436920   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:49:33.458977   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:49:33.481846   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 22:49:33.503907   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 22:49:33.525852   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:49:33.548215   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:49:33.569802   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:49:33.602465   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:49:33.628007   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:49:33.653061   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 22:49:33.668632   36778 ssh_runner.go:195] Run: openssl version
	I1209 22:49:33.674257   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:49:33.684380   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.688650   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.688714   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:49:33.694036   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:49:33.704144   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:49:33.714060   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.718184   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.718227   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:49:33.723730   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 22:49:33.734203   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:49:33.744729   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.749033   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.749080   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:49:33.754563   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
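The 8-hex-digit link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names, which is why each symlink is preceded by an openssl x509 -hash run. A sketch reproducing the check for the minikubeCA certificate on the node:

    # the hash printed by openssl should match the symlink name in /etc/ssl/certs
    minikube ssh -p ha-920193 "openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem && ls -l /etc/ssl/certs/b5213941.0"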
	I1209 22:49:33.764859   36778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:49:33.768876   36778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:49:33.768937   36778 kubeadm.go:392] StartCluster: {Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:49:33.769036   36778 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 22:49:33.769105   36778 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 22:49:33.804100   36778 cri.go:89] found id: ""
	I1209 22:49:33.804165   36778 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 22:49:33.814344   36778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 22:49:33.824218   36778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 22:49:33.834084   36778 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 22:49:33.834106   36778 kubeadm.go:157] found existing configuration files:
	
	I1209 22:49:33.834157   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 22:49:33.843339   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 22:49:33.843379   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 22:49:33.853049   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 22:49:33.862222   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 22:49:33.862280   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 22:49:33.872041   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 22:49:33.881416   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 22:49:33.881475   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 22:49:33.891237   36778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 22:49:33.900609   36778 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 22:49:33.900659   36778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 22:49:33.910089   36778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 22:49:34.000063   36778 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 22:49:34.000183   36778 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 22:49:34.091544   36778 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 22:49:34.091739   36778 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 22:49:34.091892   36778 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 22:49:34.100090   36778 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 22:49:34.102871   36778 out.go:235]   - Generating certificates and keys ...
	I1209 22:49:34.103528   36778 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 22:49:34.103648   36778 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 22:49:34.284340   36778 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 22:49:34.462874   36778 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 22:49:34.647453   36778 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 22:49:34.787984   36778 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 22:49:35.020609   36778 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 22:49:35.020761   36778 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-920193 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1209 22:49:35.078800   36778 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 22:49:35.078977   36778 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-920193 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1209 22:49:35.150500   36778 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 22:49:35.230381   36778 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 22:49:35.499235   36778 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 22:49:35.499319   36778 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 22:49:35.912886   36778 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 22:49:36.241120   36778 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 22:49:36.405939   36778 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 22:49:36.604047   36778 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 22:49:36.814671   36778 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 22:49:36.815164   36778 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 22:49:36.818373   36778 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 22:49:36.820325   36778 out.go:235]   - Booting up control plane ...
	I1209 22:49:36.820430   36778 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 22:49:36.820522   36778 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 22:49:36.821468   36778 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 22:49:36.841330   36778 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 22:49:36.848308   36778 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 22:49:36.848421   36778 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 22:49:36.995410   36778 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 22:49:36.995535   36778 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 22:49:37.995683   36778 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001015441s
	I1209 22:49:37.995786   36778 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 22:49:43.754200   36778 kubeadm.go:310] [api-check] The API server is healthy after 5.761609039s
	I1209 22:49:43.767861   36778 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 22:49:43.785346   36778 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 22:49:43.810025   36778 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 22:49:43.810266   36778 kubeadm.go:310] [mark-control-plane] Marking the node ha-920193 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 22:49:43.821256   36778 kubeadm.go:310] [bootstrap-token] Using token: 72yxn0.qrsfcagkngfj4gxi
	I1209 22:49:43.822572   36778 out.go:235]   - Configuring RBAC rules ...
	I1209 22:49:43.822691   36778 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 22:49:43.832707   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 22:49:43.844059   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 22:49:43.846995   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 22:49:43.849841   36778 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 22:49:43.856257   36778 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 22:49:44.160151   36778 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 22:49:44.591740   36778 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 22:49:45.161509   36778 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 22:49:45.162464   36778 kubeadm.go:310] 
	I1209 22:49:45.162543   36778 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 22:49:45.162552   36778 kubeadm.go:310] 
	I1209 22:49:45.162641   36778 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 22:49:45.162653   36778 kubeadm.go:310] 
	I1209 22:49:45.162689   36778 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 22:49:45.162763   36778 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 22:49:45.162845   36778 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 22:49:45.162856   36778 kubeadm.go:310] 
	I1209 22:49:45.162934   36778 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 22:49:45.162944   36778 kubeadm.go:310] 
	I1209 22:49:45.163005   36778 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 22:49:45.163016   36778 kubeadm.go:310] 
	I1209 22:49:45.163084   36778 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 22:49:45.163184   36778 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 22:49:45.163290   36778 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 22:49:45.163301   36778 kubeadm.go:310] 
	I1209 22:49:45.163412   36778 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 22:49:45.163482   36778 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 22:49:45.163488   36778 kubeadm.go:310] 
	I1209 22:49:45.163586   36778 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 72yxn0.qrsfcagkngfj4gxi \
	I1209 22:49:45.163727   36778 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1209 22:49:45.163762   36778 kubeadm.go:310] 	--control-plane 
	I1209 22:49:45.163771   36778 kubeadm.go:310] 
	I1209 22:49:45.163891   36778 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 22:49:45.163902   36778 kubeadm.go:310] 
	I1209 22:49:45.164042   36778 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 72yxn0.qrsfcagkngfj4gxi \
	I1209 22:49:45.164198   36778 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1209 22:49:45.164453   36778 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
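The only preflight warning is that the kubelet systemd unit is not enabled for boot; minikube starts kubelet itself (the systemctl start kubelet earlier in this log), so init still succeeds. The warning's own suggested fix, if persistence across VM reboots is wanted, is simply:

    # silence the [WARNING Service-Kubelet] above by enabling the unit
    minikube ssh -p ha-920193 "sudo systemctl enable kubelet.service"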
	I1209 22:49:45.164487   36778 cni.go:84] Creating CNI manager for ""
	I1209 22:49:45.164497   36778 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 22:49:45.166869   36778 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1209 22:49:45.168578   36778 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 22:49:45.173867   36778 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1209 22:49:45.173890   36778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 22:49:45.193577   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
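With a single node detected, minikube recommends kindnet and applies its manifest from /var/tmp/minikube/cni.yaml. A sketch for confirming the CNI pods come up (the "kindnet" pod-name prefix is an assumption based on minikube's default CNI manifest, not something shown in this log):

    # kindnet pods should appear in kube-system shortly after the apply above
    kubectl --context ha-920193 -n kube-system get pods -o wide | grep kindnet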
	I1209 22:49:45.540330   36778 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 22:49:45.540400   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:45.540429   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-920193 minikube.k8s.io/updated_at=2024_12_09T22_49_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=ha-920193 minikube.k8s.io/primary=true
	I1209 22:49:45.563713   36778 ops.go:34] apiserver oom_adj: -16
	I1209 22:49:45.755027   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:46.255384   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:46.755819   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:47.255436   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:47.755914   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:48.255404   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:48.755938   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:49.255745   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 22:49:49.346913   36778 kubeadm.go:1113] duration metric: took 3.806571287s to wait for elevateKubeSystemPrivileges
	I1209 22:49:49.346942   36778 kubeadm.go:394] duration metric: took 15.578011127s to StartCluster
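The ~3.8s elevateKubeSystemPrivileges wait above is a poll: kubectl get sa default is retried roughly every 500ms (per the timestamps) until the controller manager has created the default ServiceAccount. The same wait, as a plain shell sketch:

    # block until the default ServiceAccount exists in the default namespace
    until kubectl --context ha-920193 -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done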
	I1209 22:49:49.346958   36778 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:49.347032   36778 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:49:49.347686   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:49:49.347889   36778 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:49:49.347901   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 22:49:49.347912   36778 start.go:241] waiting for startup goroutines ...
	I1209 22:49:49.347916   36778 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 22:49:49.347997   36778 addons.go:69] Setting storage-provisioner=true in profile "ha-920193"
	I1209 22:49:49.348008   36778 addons.go:69] Setting default-storageclass=true in profile "ha-920193"
	I1209 22:49:49.348018   36778 addons.go:234] Setting addon storage-provisioner=true in "ha-920193"
	I1209 22:49:49.348025   36778 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-920193"
	I1209 22:49:49.348059   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:49:49.348092   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:49:49.348366   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.348401   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.348486   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.348504   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.364294   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41179
	I1209 22:49:49.364762   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I1209 22:49:49.364808   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.365192   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.365331   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.365359   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.365654   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.365671   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.365700   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.365855   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:49.366017   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.366436   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.366477   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.367841   36778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:49:49.368072   36778 kapi.go:59] client config for ha-920193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key", CAFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c9a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 22:49:49.368506   36778 cert_rotation.go:140] Starting client certificate rotation controller
	I1209 22:49:49.368728   36778 addons.go:234] Setting addon default-storageclass=true in "ha-920193"
	I1209 22:49:49.368759   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:49:49.368995   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.369045   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.381548   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44341
	I1209 22:49:49.382048   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.382623   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.382650   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.382946   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.383123   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:49.384085   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35753
	I1209 22:49:49.384563   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.385002   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:49.385074   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.385099   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.385406   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.385869   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:49.385898   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:49.387093   36778 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 22:49:49.388363   36778 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 22:49:49.388378   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 22:49:49.388396   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:49.391382   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.391959   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:49.391988   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.392168   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:49.392369   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:49.392529   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:49.392718   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:49.402583   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I1209 22:49:49.403101   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:49.403703   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:49.403733   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:49.404140   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:49.404327   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:49:49.406048   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:49:49.406246   36778 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 22:49:49.406264   36778 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 22:49:49.406283   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:49:49.409015   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.409417   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:49:49.409445   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:49:49.409566   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:49:49.409736   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:49:49.409906   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:49:49.410051   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:49:49.469421   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 22:49:49.523797   36778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 22:49:49.572493   36778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 22:49:49.935058   36778 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
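The "host record injected" line above is the result of the kubectl|sed|kubectl pipeline logged a few lines earlier. A minimal Go sketch that rebuilds that same pipeline as a string, for readability only (buildCoreDNSPatchCmd is an illustrative helper, not minikube's API; minikube runs the command through /bin/bash -c via its ssh_runner):

package main

import "fmt"

// buildCoreDNSPatchCmd rebuilds the pipeline from the log: it inserts a hosts{}
// block (gateway IP -> host.minikube.internal) before the forward plugin and a
// log directive before errors, then replaces the coredns ConfigMap.
func buildCoreDNSPatchCmd(kubectl, kubeconfig, hostIP string) string {
	sedScript := fmt.Sprintf(
		`-e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log'`,
		hostIP)
	return fmt.Sprintf(
		"sudo %[1]s --kubeconfig=%[2]s -n kube-system get configmap coredns -o yaml | sed %[3]s | sudo %[1]s --kubeconfig=%[2]s replace -f -",
		kubectl, kubeconfig, sedScript)
}

func main() {
	fmt.Println(buildCoreDNSPatchCmd(
		"/var/lib/minikube/binaries/v1.31.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		"192.168.39.1"))
}
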
	I1209 22:49:50.246776   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.246808   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.246866   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.246889   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.247109   36778 main.go:141] libmachine: (ha-920193) DBG | Closing plugin on server side
	I1209 22:49:50.247126   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247142   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247149   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247150   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247161   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.247168   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.247161   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.247214   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.247452   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247465   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247474   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.247491   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.247524   36778 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 22:49:50.247539   36778 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 22:49:50.247452   36778 main.go:141] libmachine: (ha-920193) DBG | Closing plugin on server side
	I1209 22:49:50.247679   36778 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1209 22:49:50.247688   36778 round_trippers.go:469] Request Headers:
	I1209 22:49:50.247699   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:49:50.247705   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:49:50.258818   36778 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1209 22:49:50.259388   36778 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1209 22:49:50.259405   36778 round_trippers.go:469] Request Headers:
	I1209 22:49:50.259415   36778 round_trippers.go:473]     Content-Type: application/json
	I1209 22:49:50.259421   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:49:50.259427   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:49:50.263578   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
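The GET/PUT pair above is the storageclass addon reconciling the "standard" StorageClass through the HA VIP (192.168.39.254:8443). A hedged client-go sketch of an equivalent flow, assuming the PUT sets the default-class annotation (minikube's actual code path may differ):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client against the same kubeconfig the log uses on the node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET then update (the PUT in the log), marking "standard" as the default class.
	sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
	fmt.Println("updated StorageClass standard, err:", err)
}
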
	I1209 22:49:50.263947   36778 main.go:141] libmachine: Making call to close driver server
	I1209 22:49:50.263973   36778 main.go:141] libmachine: (ha-920193) Calling .Close
	I1209 22:49:50.264222   36778 main.go:141] libmachine: Successfully made call to close driver server
	I1209 22:49:50.264298   36778 main.go:141] libmachine: (ha-920193) DBG | Closing plugin on server side
	I1209 22:49:50.264309   36778 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 22:49:50.266759   36778 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1209 22:49:50.268058   36778 addons.go:510] duration metric: took 920.142906ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 22:49:50.268097   36778 start.go:246] waiting for cluster config update ...
	I1209 22:49:50.268112   36778 start.go:255] writing updated cluster config ...
	I1209 22:49:50.269702   36778 out.go:201] 
	I1209 22:49:50.271046   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:49:50.271126   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:50.272711   36778 out.go:177] * Starting "ha-920193-m02" control-plane node in "ha-920193" cluster
	I1209 22:49:50.273838   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:49:50.273861   36778 cache.go:56] Caching tarball of preloaded images
	I1209 22:49:50.273946   36778 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:49:50.273960   36778 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:49:50.274036   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:49:50.274220   36778 start.go:360] acquireMachinesLock for ha-920193-m02: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:49:50.274272   36778 start.go:364] duration metric: took 30.506µs to acquireMachinesLock for "ha-920193-m02"
	I1209 22:49:50.274296   36778 start.go:93] Provisioning new machine with config: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:49:50.274418   36778 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1209 22:49:50.275986   36778 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 22:49:50.276071   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:49:50.276101   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:49:50.290689   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42005
	I1209 22:49:50.291090   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:49:50.291624   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:49:50.291657   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:49:50.291974   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:49:50.292165   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:49:50.292290   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:49:50.292460   36778 start.go:159] libmachine.API.Create for "ha-920193" (driver="kvm2")
	I1209 22:49:50.292488   36778 client.go:168] LocalClient.Create starting
	I1209 22:49:50.292523   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:49:50.292562   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:50.292580   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:50.292650   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:49:50.292677   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:49:50.292694   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:49:50.292719   36778 main.go:141] libmachine: Running pre-create checks...
	I1209 22:49:50.292730   36778 main.go:141] libmachine: (ha-920193-m02) Calling .PreCreateCheck
	I1209 22:49:50.292863   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetConfigRaw
	I1209 22:49:50.293207   36778 main.go:141] libmachine: Creating machine...
	I1209 22:49:50.293220   36778 main.go:141] libmachine: (ha-920193-m02) Calling .Create
	I1209 22:49:50.293319   36778 main.go:141] libmachine: (ha-920193-m02) Creating KVM machine...
	I1209 22:49:50.294569   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found existing default KVM network
	I1209 22:49:50.294708   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found existing private KVM network mk-ha-920193
	I1209 22:49:50.294863   36778 main.go:141] libmachine: (ha-920193-m02) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02 ...
	I1209 22:49:50.294888   36778 main.go:141] libmachine: (ha-920193-m02) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:49:50.294937   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.294840   37166 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:50.295026   36778 main.go:141] libmachine: (ha-920193-m02) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:49:50.540657   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.540505   37166 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa...
	I1209 22:49:50.636978   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.636881   37166 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/ha-920193-m02.rawdisk...
	I1209 22:49:50.637002   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Writing magic tar header
	I1209 22:49:50.637012   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Writing SSH key tar header
	I1209 22:49:50.637092   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:50.637012   37166 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02 ...
	I1209 22:49:50.637134   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02
	I1209 22:49:50.637167   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02 (perms=drwx------)
	I1209 22:49:50.637189   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:49:50.637210   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:49:50.637225   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:49:50.637240   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:49:50.637251   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:49:50.637263   36778 main.go:141] libmachine: (ha-920193-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:49:50.637274   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:49:50.637286   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:49:50.637298   36778 main.go:141] libmachine: (ha-920193-m02) Creating domain...
	I1209 22:49:50.637312   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:49:50.637321   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:49:50.637330   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Checking permissions on dir: /home
	I1209 22:49:50.637341   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Skipping /home - not owner
	I1209 22:49:50.638225   36778 main.go:141] libmachine: (ha-920193-m02) define libvirt domain using xml: 
	I1209 22:49:50.638247   36778 main.go:141] libmachine: (ha-920193-m02) <domain type='kvm'>
	I1209 22:49:50.638255   36778 main.go:141] libmachine: (ha-920193-m02)   <name>ha-920193-m02</name>
	I1209 22:49:50.638263   36778 main.go:141] libmachine: (ha-920193-m02)   <memory unit='MiB'>2200</memory>
	I1209 22:49:50.638271   36778 main.go:141] libmachine: (ha-920193-m02)   <vcpu>2</vcpu>
	I1209 22:49:50.638284   36778 main.go:141] libmachine: (ha-920193-m02)   <features>
	I1209 22:49:50.638291   36778 main.go:141] libmachine: (ha-920193-m02)     <acpi/>
	I1209 22:49:50.638306   36778 main.go:141] libmachine: (ha-920193-m02)     <apic/>
	I1209 22:49:50.638319   36778 main.go:141] libmachine: (ha-920193-m02)     <pae/>
	I1209 22:49:50.638328   36778 main.go:141] libmachine: (ha-920193-m02)     
	I1209 22:49:50.638333   36778 main.go:141] libmachine: (ha-920193-m02)   </features>
	I1209 22:49:50.638340   36778 main.go:141] libmachine: (ha-920193-m02)   <cpu mode='host-passthrough'>
	I1209 22:49:50.638346   36778 main.go:141] libmachine: (ha-920193-m02)   
	I1209 22:49:50.638356   36778 main.go:141] libmachine: (ha-920193-m02)   </cpu>
	I1209 22:49:50.638364   36778 main.go:141] libmachine: (ha-920193-m02)   <os>
	I1209 22:49:50.638380   36778 main.go:141] libmachine: (ha-920193-m02)     <type>hvm</type>
	I1209 22:49:50.638393   36778 main.go:141] libmachine: (ha-920193-m02)     <boot dev='cdrom'/>
	I1209 22:49:50.638403   36778 main.go:141] libmachine: (ha-920193-m02)     <boot dev='hd'/>
	I1209 22:49:50.638426   36778 main.go:141] libmachine: (ha-920193-m02)     <bootmenu enable='no'/>
	I1209 22:49:50.638448   36778 main.go:141] libmachine: (ha-920193-m02)   </os>
	I1209 22:49:50.638464   36778 main.go:141] libmachine: (ha-920193-m02)   <devices>
	I1209 22:49:50.638475   36778 main.go:141] libmachine: (ha-920193-m02)     <disk type='file' device='cdrom'>
	I1209 22:49:50.638507   36778 main.go:141] libmachine: (ha-920193-m02)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/boot2docker.iso'/>
	I1209 22:49:50.638533   36778 main.go:141] libmachine: (ha-920193-m02)       <target dev='hdc' bus='scsi'/>
	I1209 22:49:50.638547   36778 main.go:141] libmachine: (ha-920193-m02)       <readonly/>
	I1209 22:49:50.638559   36778 main.go:141] libmachine: (ha-920193-m02)     </disk>
	I1209 22:49:50.638570   36778 main.go:141] libmachine: (ha-920193-m02)     <disk type='file' device='disk'>
	I1209 22:49:50.638583   36778 main.go:141] libmachine: (ha-920193-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:49:50.638601   36778 main.go:141] libmachine: (ha-920193-m02)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/ha-920193-m02.rawdisk'/>
	I1209 22:49:50.638612   36778 main.go:141] libmachine: (ha-920193-m02)       <target dev='hda' bus='virtio'/>
	I1209 22:49:50.638623   36778 main.go:141] libmachine: (ha-920193-m02)     </disk>
	I1209 22:49:50.638632   36778 main.go:141] libmachine: (ha-920193-m02)     <interface type='network'>
	I1209 22:49:50.638641   36778 main.go:141] libmachine: (ha-920193-m02)       <source network='mk-ha-920193'/>
	I1209 22:49:50.638652   36778 main.go:141] libmachine: (ha-920193-m02)       <model type='virtio'/>
	I1209 22:49:50.638661   36778 main.go:141] libmachine: (ha-920193-m02)     </interface>
	I1209 22:49:50.638672   36778 main.go:141] libmachine: (ha-920193-m02)     <interface type='network'>
	I1209 22:49:50.638680   36778 main.go:141] libmachine: (ha-920193-m02)       <source network='default'/>
	I1209 22:49:50.638690   36778 main.go:141] libmachine: (ha-920193-m02)       <model type='virtio'/>
	I1209 22:49:50.638708   36778 main.go:141] libmachine: (ha-920193-m02)     </interface>
	I1209 22:49:50.638726   36778 main.go:141] libmachine: (ha-920193-m02)     <serial type='pty'>
	I1209 22:49:50.638741   36778 main.go:141] libmachine: (ha-920193-m02)       <target port='0'/>
	I1209 22:49:50.638748   36778 main.go:141] libmachine: (ha-920193-m02)     </serial>
	I1209 22:49:50.638756   36778 main.go:141] libmachine: (ha-920193-m02)     <console type='pty'>
	I1209 22:49:50.638764   36778 main.go:141] libmachine: (ha-920193-m02)       <target type='serial' port='0'/>
	I1209 22:49:50.638775   36778 main.go:141] libmachine: (ha-920193-m02)     </console>
	I1209 22:49:50.638784   36778 main.go:141] libmachine: (ha-920193-m02)     <rng model='virtio'>
	I1209 22:49:50.638793   36778 main.go:141] libmachine: (ha-920193-m02)       <backend model='random'>/dev/random</backend>
	I1209 22:49:50.638807   36778 main.go:141] libmachine: (ha-920193-m02)     </rng>
	I1209 22:49:50.638821   36778 main.go:141] libmachine: (ha-920193-m02)     
	I1209 22:49:50.638836   36778 main.go:141] libmachine: (ha-920193-m02)     
	I1209 22:49:50.638854   36778 main.go:141] libmachine: (ha-920193-m02)   </devices>
	I1209 22:49:50.638870   36778 main.go:141] libmachine: (ha-920193-m02) </domain>
	I1209 22:49:50.638881   36778 main.go:141] libmachine: (ha-920193-m02) 
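The XML document logged line by line above is what the kvm2 driver hands to libvirt. A minimal sketch with the libvirt Go bindings showing the same define-then-create sequence (the file name ha-920193-m02.xml is an assumption for illustration; the driver builds the XML in memory):

package main

import (
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// The <domain> XML logged above, saved to a file for this sketch.
	xml, err := os.ReadFile("ha-920193-m02.xml")
	if err != nil {
		panic(err)
	}
	// KVMQemuURI:qemu:///system from the provisioning config dump.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// "define libvirt domain using xml" ...
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	// ... then "Creating domain..." starts it, after which the driver waits for an IP.
	if err := dom.Create(); err != nil {
		panic(err)
	}
}
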
	I1209 22:49:50.645452   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:4e:0e:44 in network default
	I1209 22:49:50.646094   36778 main.go:141] libmachine: (ha-920193-m02) Ensuring networks are active...
	I1209 22:49:50.646118   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:50.646792   36778 main.go:141] libmachine: (ha-920193-m02) Ensuring network default is active
	I1209 22:49:50.647136   36778 main.go:141] libmachine: (ha-920193-m02) Ensuring network mk-ha-920193 is active
	I1209 22:49:50.647479   36778 main.go:141] libmachine: (ha-920193-m02) Getting domain xml...
	I1209 22:49:50.648166   36778 main.go:141] libmachine: (ha-920193-m02) Creating domain...
	I1209 22:49:51.846569   36778 main.go:141] libmachine: (ha-920193-m02) Waiting to get IP...
	I1209 22:49:51.847529   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:51.847984   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:51.848045   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:51.847987   37166 retry.go:31] will retry after 223.150886ms: waiting for machine to come up
	I1209 22:49:52.072674   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:52.073143   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:52.073214   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:52.073119   37166 retry.go:31] will retry after 342.157886ms: waiting for machine to come up
	I1209 22:49:52.416515   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:52.416911   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:52.416933   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:52.416873   37166 retry.go:31] will retry after 319.618715ms: waiting for machine to come up
	I1209 22:49:52.738511   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:52.739067   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:52.739096   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:52.739025   37166 retry.go:31] will retry after 426.813714ms: waiting for machine to come up
	I1209 22:49:53.167672   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:53.168111   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:53.168139   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:53.168063   37166 retry.go:31] will retry after 465.129361ms: waiting for machine to come up
	I1209 22:49:53.634495   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:53.635006   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:53.635033   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:53.634965   37166 retry.go:31] will retry after 688.228763ms: waiting for machine to come up
	I1209 22:49:54.324368   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:54.324751   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:54.324780   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:54.324706   37166 retry.go:31] will retry after 952.948713ms: waiting for machine to come up
	I1209 22:49:55.278732   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:55.279052   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:55.279084   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:55.279025   37166 retry.go:31] will retry after 1.032940312s: waiting for machine to come up
	I1209 22:49:56.313177   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:56.313589   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:56.313613   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:56.313562   37166 retry.go:31] will retry after 1.349167493s: waiting for machine to come up
	I1209 22:49:57.664618   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:57.665031   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:57.665060   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:57.664986   37166 retry.go:31] will retry after 1.512445541s: waiting for machine to come up
	I1209 22:49:59.179536   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:49:59.179914   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:49:59.179939   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:49:59.179864   37166 retry.go:31] will retry after 2.399970974s: waiting for machine to come up
	I1209 22:50:01.582227   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:01.582662   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:50:01.582690   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:50:01.582599   37166 retry.go:31] will retry after 2.728474301s: waiting for machine to come up
	I1209 22:50:04.312490   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:04.312880   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:50:04.312905   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:50:04.312847   37166 retry.go:31] will retry after 4.276505546s: waiting for machine to come up
	I1209 22:50:08.590485   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:08.590927   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find current IP address of domain ha-920193-m02 in network mk-ha-920193
	I1209 22:50:08.590949   36778 main.go:141] libmachine: (ha-920193-m02) DBG | I1209 22:50:08.590889   37166 retry.go:31] will retry after 4.29966265s: waiting for machine to come up
	I1209 22:50:12.892743   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.893228   36778 main.go:141] libmachine: (ha-920193-m02) Found IP for machine: 192.168.39.43
	I1209 22:50:12.893253   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has current primary IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.893261   36778 main.go:141] libmachine: (ha-920193-m02) Reserving static IP address...
	I1209 22:50:12.893598   36778 main.go:141] libmachine: (ha-920193-m02) DBG | unable to find host DHCP lease matching {name: "ha-920193-m02", mac: "52:54:00:e3:b9:61", ip: "192.168.39.43"} in network mk-ha-920193
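The retry.go lines above show the wait-for-IP loop: poll the DHCP leases for the new domain's MAC address and back off between attempts until a lease appears. A simplified sketch of that pattern (waitForIP and lookupLeaseIP are illustrative names, and the real intervals are randomized by minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupLeaseIP is a placeholder: the real driver inspects the libvirt network's
// DHCP leases for the domain's MAC address (52:54:00:e3:b9:61 above).
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls until a lease appears, growing the delay between attempts,
// roughly mirroring the "will retry after ..." lines in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP on MAC %s", mac)
}

func main() {
	// Short timeout so the sketch terminates quickly.
	ip, err := waitForIP("52:54:00:e3:b9:61", 2*time.Second)
	fmt.Println(ip, err)
}
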
	I1209 22:50:12.967208   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Getting to WaitForSSH function...
	I1209 22:50:12.967241   36778 main.go:141] libmachine: (ha-920193-m02) Reserved static IP address: 192.168.39.43
	I1209 22:50:12.967255   36778 main.go:141] libmachine: (ha-920193-m02) Waiting for SSH to be available...
	I1209 22:50:12.969615   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.969971   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:12.969998   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:12.970158   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Using SSH client type: external
	I1209 22:50:12.970180   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa (-rw-------)
	I1209 22:50:12.970211   36778 main.go:141] libmachine: (ha-920193-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:50:12.970224   36778 main.go:141] libmachine: (ha-920193-m02) DBG | About to run SSH command:
	I1209 22:50:12.970270   36778 main.go:141] libmachine: (ha-920193-m02) DBG | exit 0
	I1209 22:50:13.099696   36778 main.go:141] libmachine: (ha-920193-m02) DBG | SSH cmd err, output: <nil>: 
	I1209 22:50:13.100005   36778 main.go:141] libmachine: (ha-920193-m02) KVM machine creation complete!
	I1209 22:50:13.100244   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetConfigRaw
	I1209 22:50:13.100810   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:13.100988   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:13.101128   36778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:50:13.101154   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetState
	I1209 22:50:13.102588   36778 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:50:13.102600   36778 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:50:13.102605   36778 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:50:13.102611   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.105041   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.105398   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.105421   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.105634   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.105791   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.105931   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.106034   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.106172   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.106381   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.106392   36778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:50:13.214686   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:50:13.214707   36778 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:50:13.214714   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.217518   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.217915   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.217939   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.218093   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.218249   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.218422   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.218594   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.218762   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.218925   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.218936   36778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:50:13.328344   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:50:13.328426   36778 main.go:141] libmachine: found compatible host: buildroot
	I1209 22:50:13.328436   36778 main.go:141] libmachine: Provisioning with buildroot...
	I1209 22:50:13.328445   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:50:13.328699   36778 buildroot.go:166] provisioning hostname "ha-920193-m02"
	I1209 22:50:13.328724   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:50:13.328916   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.331720   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.332124   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.332160   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.332317   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.332518   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.332696   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.332887   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.333073   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.333230   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.333241   36778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193-m02 && echo "ha-920193-m02" | sudo tee /etc/hostname
	I1209 22:50:13.453959   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193-m02
	
	I1209 22:50:13.453993   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.457007   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.457414   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.457445   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.457635   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.457816   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.457961   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.458096   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.458282   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.458465   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.458486   36778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:50:13.575704   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
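Hostname provisioning, like every other step in this log, is a command run over SSH with the machine's generated id_rsa. A self-contained sketch of that pattern using golang.org/x/crypto/ssh (runRemote is an illustrative helper; minikube uses its own ssh_runner and sshutil packages):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs one command on the VM, authenticating with the machine's private key.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Test VMs only; matches the -o StrictHostKeyChecking=no in the external SSH command above.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.39.43:22", "docker",
		"/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa",
		`sudo hostname ha-920193-m02 && echo "ha-920193-m02" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
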
	I1209 22:50:13.575734   36778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:50:13.575756   36778 buildroot.go:174] setting up certificates
	I1209 22:50:13.575768   36778 provision.go:84] configureAuth start
	I1209 22:50:13.575777   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetMachineName
	I1209 22:50:13.576037   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:13.578662   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.579132   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.579159   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.579337   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.581290   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.581592   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.581613   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.581740   36778 provision.go:143] copyHostCerts
	I1209 22:50:13.581770   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:50:13.581820   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:50:13.581832   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:50:13.581924   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:50:13.582006   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:50:13.582026   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:50:13.582033   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:50:13.582058   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:50:13.582102   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:50:13.582122   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:50:13.582131   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:50:13.582166   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:50:13.582231   36778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193-m02 san=[127.0.0.1 192.168.39.43 ha-920193-m02 localhost minikube]
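For reference, the server certificate above is generated with the SAN list reported in the log (127.0.0.1, 192.168.39.43, ha-920193-m02, localhost, minikube) and the 26280h lifetime from the config dump. A hedged crypto/x509 sketch of such a certificate; it self-signs for brevity, whereas minikube signs with ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-920193-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s from the config dump
		// SANs reported by provision.go above.
		DNSNames:    []string{"ha-920193-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.43")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity; the real server.pem is signed by the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated server cert: %d DER bytes\n", len(der))
}
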
	I1209 22:50:13.756786   36778 provision.go:177] copyRemoteCerts
	I1209 22:50:13.756844   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:50:13.756875   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.759281   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.759620   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.759646   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.759868   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.760043   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.760166   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.760302   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:13.842746   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:50:13.842829   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:50:13.868488   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:50:13.868558   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 22:50:13.894237   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:50:13.894300   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 22:50:13.919207   36778 provision.go:87] duration metric: took 343.427038ms to configureAuth
	I1209 22:50:13.919237   36778 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:50:13.919436   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:50:13.919529   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:13.922321   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.922667   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:13.922689   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:13.922943   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:13.923101   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.923227   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:13.923381   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:13.923527   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:13.923766   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:13.923783   36778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:50:14.145275   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:50:14.145304   36778 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:50:14.145313   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetURL
	I1209 22:50:14.146583   36778 main.go:141] libmachine: (ha-920193-m02) DBG | Using libvirt version 6000000
	I1209 22:50:14.148809   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.149140   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.149168   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.149302   36778 main.go:141] libmachine: Docker is up and running!
	I1209 22:50:14.149316   36778 main.go:141] libmachine: Reticulating splines...
	I1209 22:50:14.149322   36778 client.go:171] duration metric: took 23.856827848s to LocalClient.Create
	I1209 22:50:14.149351   36778 start.go:167] duration metric: took 23.856891761s to libmachine.API.Create "ha-920193"
	I1209 22:50:14.149370   36778 start.go:293] postStartSetup for "ha-920193-m02" (driver="kvm2")
	I1209 22:50:14.149387   36778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:50:14.149412   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.149683   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:50:14.149706   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:14.152301   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.152593   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.152623   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.152758   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.152950   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.153102   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.153238   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:14.237586   36778 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:50:14.241320   36778 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:50:14.241353   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:50:14.241430   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:50:14.241512   36778 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:50:14.241522   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:50:14.241599   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:50:14.250940   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:50:14.273559   36778 start.go:296] duration metric: took 124.171367ms for postStartSetup
	I1209 22:50:14.273622   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetConfigRaw
	I1209 22:50:14.274207   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:14.276819   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.277127   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.277156   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.277340   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:50:14.277540   36778 start.go:128] duration metric: took 24.003111268s to createHost
	I1209 22:50:14.277563   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:14.279937   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.280232   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.280257   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.280382   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.280557   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.280726   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.280910   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.281099   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:50:14.281291   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1209 22:50:14.281304   36778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:50:14.388152   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733784614.364424625
	
	I1209 22:50:14.388174   36778 fix.go:216] guest clock: 1733784614.364424625
	I1209 22:50:14.388181   36778 fix.go:229] Guest: 2024-12-09 22:50:14.364424625 +0000 UTC Remote: 2024-12-09 22:50:14.27755238 +0000 UTC m=+71.170238927 (delta=86.872245ms)
	I1209 22:50:14.388195   36778 fix.go:200] guest clock delta is within tolerance: 86.872245ms
	I1209 22:50:14.388200   36778 start.go:83] releasing machines lock for "ha-920193-m02", held for 24.113917393s
	I1209 22:50:14.388222   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.388471   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:14.391084   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.391432   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.391458   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.393935   36778 out.go:177] * Found network options:
	I1209 22:50:14.395356   36778 out.go:177]   - NO_PROXY=192.168.39.102
	W1209 22:50:14.396713   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:50:14.396769   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.397373   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.397558   36778 main.go:141] libmachine: (ha-920193-m02) Calling .DriverName
	I1209 22:50:14.397653   36778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:50:14.397697   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	W1209 22:50:14.397767   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:50:14.397855   36778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:50:14.397879   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHHostname
	I1209 22:50:14.400330   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.400563   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.400725   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.400755   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.400909   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.400944   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:14.400970   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:14.401106   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.401188   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHPort
	I1209 22:50:14.401275   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.401373   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHKeyPath
	I1209 22:50:14.401443   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:14.401504   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetSSHUsername
	I1209 22:50:14.401614   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m02/id_rsa Username:docker}
	I1209 22:50:14.637188   36778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:50:14.643200   36778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:50:14.643281   36778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:50:14.659398   36778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 22:50:14.659426   36778 start.go:495] detecting cgroup driver to use...
	I1209 22:50:14.659491   36778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:50:14.676247   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:50:14.690114   36778 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:50:14.690183   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:50:14.704181   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:50:14.718407   36778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:50:14.836265   36778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:50:14.977440   36778 docker.go:233] disabling docker service ...
	I1209 22:50:14.977523   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:50:14.992218   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:50:15.006032   36778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:50:15.132938   36778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:50:15.246587   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:50:15.260594   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:50:15.278081   36778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:50:15.278144   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.288215   36778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:50:15.288291   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.298722   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.309333   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.319278   36778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:50:15.329514   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.339686   36778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.356544   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:50:15.367167   36778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:50:15.376313   36778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:50:15.376368   36778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:50:15.389607   36778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
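The sysctl failure above is expected and tolerated: when /proc/sys/net/bridge/bridge-nf-call-iptables is missing, the provisioner falls back to loading the br_netfilter module and then enables IPv4 forwarding. A rough local Go sketch of that fallback (in the log the same commands run on the guest over SSH via ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command locally and returns a wrapped error with its output;
// in the log above the equivalent commands are executed remotely over SSH.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
	}
	return nil
}

func main() {
	// Try to read the bridge netfilter sysctl; if br_netfilter is not loaded
	// the file does not exist and sysctl exits non-zero.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge netfilter not available, loading module:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
		}
	}
	// Enable IPv4 forwarding either way, as the log does.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}
```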
	I1209 22:50:15.399026   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:50:15.510388   36778 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 22:50:15.594142   36778 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:50:15.594209   36778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:50:15.598620   36778 start.go:563] Will wait 60s for crictl version
	I1209 22:50:15.598673   36778 ssh_runner.go:195] Run: which crictl
	I1209 22:50:15.602047   36778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:50:15.640250   36778 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:50:15.640331   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:50:15.667027   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:50:15.696782   36778 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:50:15.698100   36778 out.go:177]   - env NO_PROXY=192.168.39.102
	I1209 22:50:15.699295   36778 main.go:141] libmachine: (ha-920193-m02) Calling .GetIP
	I1209 22:50:15.701971   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:15.702367   36778 main.go:141] libmachine: (ha-920193-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:b9:61", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:50:04 +0000 UTC Type:0 Mac:52:54:00:e3:b9:61 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-920193-m02 Clientid:01:52:54:00:e3:b9:61}
	I1209 22:50:15.702391   36778 main.go:141] libmachine: (ha-920193-m02) DBG | domain ha-920193-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:e3:b9:61 in network mk-ha-920193
	I1209 22:50:15.702593   36778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:50:15.706559   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
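The grep/echo/cp pipeline above rewrites /etc/hosts so that host.minikube.internal maps to the gateway IP exactly once, no matter how many times it is re-run. A small Go sketch of the same idempotent upsert, writing to a placeholder path rather than the real /etc/hosts:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry rewrites a hosts file so that exactly one line maps the
// given name to the given IP, mirroring the grep-v/echo/cp pipeline logged
// above. The path used in main is a placeholder for illustration.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing line that already ends with this host name.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/tmp/hosts.example", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}
```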
	I1209 22:50:15.719413   36778 mustload.go:65] Loading cluster: ha-920193
	I1209 22:50:15.719679   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:50:15.720045   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:50:15.720080   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:50:15.735359   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I1209 22:50:15.735806   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:50:15.736258   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:50:15.736277   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:50:15.736597   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:50:15.736809   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:50:15.738383   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:50:15.738784   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:50:15.738819   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:50:15.754087   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I1209 22:50:15.754545   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:50:15.755016   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:50:15.755039   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:50:15.755363   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:50:15.755658   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:50:15.755811   36778 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.43
	I1209 22:50:15.755825   36778 certs.go:194] generating shared ca certs ...
	I1209 22:50:15.755842   36778 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:50:15.756003   36778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:50:15.756062   36778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:50:15.756077   36778 certs.go:256] generating profile certs ...
	I1209 22:50:15.756191   36778 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:50:15.756224   36778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a
	I1209 22:50:15.756244   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.43 192.168.39.254]
	I1209 22:50:15.922567   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a ...
	I1209 22:50:15.922607   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a: {Name:mkdd9b3ceabde3bba17fb86e02452182c7c5df88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:50:15.922833   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a ...
	I1209 22:50:15.922852   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a: {Name:mkf2dc6e973669b6272c7472a098255f36b1b21a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:50:15.922964   36778 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.4dc3270a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:50:15.923108   36778 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.4dc3270a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
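The crypto.go lines above mint an apiserver serving certificate whose IP SANs cover the in-cluster service IP, localhost, both control-plane node IPs, and the HA VIP. A self-contained Go sketch of issuing such a cert; note that this sketch signs with a throwaway CA generated on the spot, whereas the log reuses the existing minikubeCA key, and error handling is elided for brevity:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for illustration only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs listed in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.102"), net.ParseIP("192.168.39.43"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
```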
	I1209 22:50:15.923250   36778 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:50:15.923268   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:50:15.923283   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:50:15.923300   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:50:15.923315   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:50:15.923331   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:50:15.923346   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:50:15.923361   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:50:15.923376   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:50:15.923447   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:50:15.923481   36778 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:50:15.923492   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:50:15.923526   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:50:15.923552   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:50:15.923617   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:50:15.923669   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:50:15.923701   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:50:15.923718   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:15.923736   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:50:15.923774   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:50:15.926684   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:15.927100   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:50:15.927132   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:15.927316   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:50:15.927520   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:50:15.927686   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:50:15.927817   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:50:15.995984   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 22:50:16.000689   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 22:50:16.010769   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 22:50:16.015461   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 22:50:16.025382   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 22:50:16.029170   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 22:50:16.038869   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 22:50:16.042928   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 22:50:16.052680   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 22:50:16.056624   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 22:50:16.067154   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 22:50:16.071136   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1209 22:50:16.081380   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:50:16.105907   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:50:16.130202   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:50:16.154712   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:50:16.178136   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 22:50:16.201144   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 22:50:16.223968   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:50:16.245967   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:50:16.268545   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:50:16.290945   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:50:16.313125   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:50:16.335026   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 22:50:16.350896   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 22:50:16.366797   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 22:50:16.382304   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 22:50:16.398151   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 22:50:16.413542   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1209 22:50:16.428943   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 22:50:16.443894   36778 ssh_runner.go:195] Run: openssl version
	I1209 22:50:16.449370   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:50:16.460122   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:50:16.464413   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:50:16.464474   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:50:16.470266   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 22:50:16.480854   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:50:16.491307   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:50:16.495420   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:50:16.495468   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:50:16.500658   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:50:16.511025   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:50:16.521204   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:16.525268   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:16.525347   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:50:16.530531   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
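Each CA bundle copied above is made discoverable to OpenSSL by linking it under its subject hash (the `<hash>.0` names such as 51391683.0, 3ec20f2e.0, b5213941.0). A Go sketch of that hash-and-symlink step, shelling out to openssl as the log does; the paths in main are placeholders:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a PEM certificate and
// creates the <hash>.0 symlink that the system trust store expects, which is
// what the `openssl x509 -hash` and `ln -fs` commands above do on the guest.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}
```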
	I1209 22:50:16.542187   36778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:50:16.546109   36778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:50:16.546164   36778 kubeadm.go:934] updating node {m02 192.168.39.43 8443 v1.31.2 crio true true} ...
	I1209 22:50:16.546250   36778 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 22:50:16.546279   36778 kube-vip.go:115] generating kube-vip config ...
	I1209 22:50:16.546321   36778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:50:16.565259   36778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:50:16.565317   36778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
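The kube-vip manifest above is generated from a template and later written out as a static pod under /etc/kubernetes/manifests/kube-vip.yaml. A trimmed Go text/template sketch of that generation step, keeping only the fields that vary per cluster (image, VIP address, interface); the template here is a stand-in, not the tool's real template:

```go
package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the kube-vip static pod manifest shown above;
// only the per-cluster fields are templated in this sketch.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    env:
    - name: address
      value: {{ .VIP }}
    - name: vip_interface
      value: {{ .Interface }}
  hostNetwork: true
`

type vipConfig struct {
	Image, VIP, Interface string
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the generated config above; in practice they come
	// from the cluster profile.
	cfg := vipConfig{Image: "ghcr.io/kube-vip/kube-vip:v0.8.7", VIP: "192.168.39.254", Interface: "eth0"}
	_ = t.Execute(os.Stdout, cfg)
}
```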
	I1209 22:50:16.565368   36778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:50:16.576227   36778 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 22:50:16.576286   36778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 22:50:16.587283   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 22:50:16.587313   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:50:16.587347   36778 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1209 22:50:16.587371   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:50:16.587429   36778 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1209 22:50:16.591406   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 22:50:16.591443   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 22:50:17.403840   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:50:17.403917   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:50:17.408515   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 22:50:17.408550   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 22:50:17.508668   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:50:17.539619   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:50:17.539709   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:50:17.547698   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 22:50:17.547746   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
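The stat-then-scp pattern above avoids re-uploading kubectl, kubeadm, and kubelet when a matching copy already exists on the guest. A local Go sketch of that existence check (the real check runs `stat -c "%s %y"` over SSH and compares against the cached binary; sizes here are placeholders):

```go
package main

import (
	"fmt"
	"os"
)

// needsTransfer reports whether a binary must be copied: missing or
// size-mismatched files trigger a fresh scp, as in the log above.
func needsTransfer(remotePath string, wantSize int64) bool {
	info, err := os.Stat(remotePath)
	if err != nil {
		return true // missing -> copy it
	}
	return info.Size() != wantSize // size mismatch -> copy it again
}

func main() {
	for _, bin := range []string{"kubectl", "kubeadm", "kubelet"} {
		path := "/var/lib/minikube/binaries/v1.31.2/" + bin
		// 0 is a placeholder for the expected size of the cached binary.
		fmt.Printf("%s needs transfer: %v\n", bin, needsTransfer(path, 0))
	}
}
```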
	I1209 22:50:17.976645   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 22:50:17.986050   36778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 22:50:18.001981   36778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:50:18.017737   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 22:50:18.034382   36778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:50:18.038243   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:50:18.051238   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:50:18.168167   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:50:18.185010   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:50:18.185466   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:50:18.185511   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:50:18.200608   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36771
	I1209 22:50:18.201083   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:50:18.201577   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:50:18.201599   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:50:18.201983   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:50:18.202177   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:50:18.202335   36778 start.go:317] joinCluster: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:50:18.202454   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 22:50:18.202478   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:50:18.205838   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:18.206272   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:50:18.206305   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:50:18.206454   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:50:18.206651   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:50:18.206809   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:50:18.206953   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:50:18.346102   36778 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:50:18.346151   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0e0mum.qzhjvrjwvxlgpdn7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443"
	I1209 22:50:38.220755   36778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0e0mum.qzhjvrjwvxlgpdn7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m02 --control-plane --apiserver-advertise-address=192.168.39.43 --apiserver-bind-port=8443": (19.874577958s)
	I1209 22:50:38.220795   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 22:50:38.605694   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-920193-m02 minikube.k8s.io/updated_at=2024_12_09T22_50_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=ha-920193 minikube.k8s.io/primary=false
	I1209 22:50:38.732046   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-920193-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 22:50:38.853470   36778 start.go:319] duration metric: took 20.651129665s to joinCluster
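The join above is a plain `kubeadm join` executed on the new machine over SSH, with the extra flags that make it a second control-plane member. A Go sketch assembling the same command line from placeholders (the real token and CA hash are minted earlier by `kubeadm token create --print-join-command`, as logged above):

```go
package main

import "fmt"

func main() {
	// Placeholders; real values come from `kubeadm token create --print-join-command`.
	token := "<token>"
	caHash := "<sha256-of-ca>"
	join := fmt.Sprintf(
		"sudo env PATH=\"/var/lib/minikube/binaries/v1.31.2:$PATH\" "+
			"kubeadm join control-plane.minikube.internal:8443 "+
			"--token %s --discovery-token-ca-cert-hash sha256:%s "+
			"--ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock "+
			"--node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
		token, caHash, "ha-920193-m02", "192.168.39.43")
	fmt.Println(join)
}
```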
	I1209 22:50:38.853557   36778 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:50:38.853987   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:50:38.855541   36778 out.go:177] * Verifying Kubernetes components...
	I1209 22:50:38.856758   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:50:39.134622   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:50:39.155772   36778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:50:39.156095   36778 kapi.go:59] client config for ha-920193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key", CAFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c9a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 22:50:39.156174   36778 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1209 22:50:39.156458   36778 node_ready.go:35] waiting up to 6m0s for node "ha-920193-m02" to be "Ready" ...
	I1209 22:50:39.156557   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:39.156569   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:39.156580   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:39.156589   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:39.166040   36778 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1209 22:50:39.656808   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:39.656835   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:39.656848   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:39.656853   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:39.660666   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:40.157282   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:40.157306   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:40.157314   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:40.157319   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:40.171594   36778 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1209 22:50:40.656953   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:40.656975   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:40.656984   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:40.656988   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:40.660321   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:41.157246   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:41.157267   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:41.157275   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:41.157278   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:41.160595   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:41.161242   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
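The surrounding loop polls GET /api/v1/nodes/ha-920193-m02 roughly every 500ms until the node's Ready condition turns True (up to the 6-minute budget). The log talks to the API server through raw round-trippers; a hedged client-go sketch of an equivalent wait, with a placeholder kubeconfig path, might look like:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object every 500ms, as the loop in the log
// does, until the Ready condition is True or the timeout expires.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready after %v", name, timeout)
}

func main() {
	// Placeholder kubeconfig path for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-920193-m02", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```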
	I1209 22:50:41.657713   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:41.657743   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:41.657754   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:41.657760   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:41.661036   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:42.157055   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:42.157081   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:42.157092   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:42.157098   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:42.160406   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:42.657502   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:42.657525   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:42.657535   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:42.657543   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:42.660437   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:43.157580   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:43.157601   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:43.157610   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:43.157614   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:43.159874   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:43.657603   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:43.657624   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:43.657631   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:43.657635   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:43.661418   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:43.662212   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:44.157154   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:44.157180   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:44.157193   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:44.157199   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:44.160641   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:44.657594   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:44.657632   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:44.657639   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:44.657643   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:44.660444   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:45.156643   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:45.156665   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:45.156673   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:45.156678   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:45.159591   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:45.656824   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:45.656848   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:45.656860   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:45.656865   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:45.660567   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:46.157410   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:46.157431   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:46.157440   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:46.157444   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:46.164952   36778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1209 22:50:46.165425   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:46.656667   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:46.656688   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:46.656695   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:46.656701   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:46.660336   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:47.157296   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:47.157321   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:47.157329   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:47.157332   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:47.160332   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:47.657301   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:47.657323   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:47.657331   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:47.657336   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:47.660325   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:48.157563   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:48.157584   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:48.157594   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:48.157608   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:48.160951   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:48.657246   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:48.657273   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:48.657284   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:48.657292   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:48.660393   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:48.661028   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:49.157387   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:49.157407   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:49.157413   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:49.157418   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:49.160553   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:49.656857   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:49.656876   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:49.656884   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:49.656887   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:49.660150   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:50.157105   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:50.157127   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:50.157135   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:50.157138   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:50.160132   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:50.657157   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:50.657175   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:50.657183   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:50.657186   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:50.660060   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:51.156681   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:51.156703   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:51.156710   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:51.156715   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:51.160061   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:51.160485   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:51.656792   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:51.656814   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:51.656822   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:51.656828   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:51.660462   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:52.157422   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:52.157444   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:52.157452   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:52.157456   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:52.160620   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:52.657587   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:52.657612   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:52.657623   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:52.657635   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:52.661805   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:50:53.156794   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:53.156813   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:53.156820   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:53.156824   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:53.159611   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:53.657422   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:53.657443   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:53.657451   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:53.657456   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:53.660973   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:53.661490   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:54.156741   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:54.156775   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:54.156788   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:54.156793   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:54.159842   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:54.657520   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:54.657542   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:54.657551   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:54.657556   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:54.661360   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:55.157356   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:55.157381   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:55.157389   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:55.157398   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:55.160974   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:55.657357   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:55.657380   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:55.657386   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:55.657389   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:55.661109   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:55.661633   36778 node_ready.go:53] node "ha-920193-m02" has status "Ready":"False"
	I1209 22:50:56.156805   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:56.156829   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:56.156842   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:56.156848   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:56.159652   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:56.657355   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:56.657382   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:56.657391   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:56.657396   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:56.660284   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.156798   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.156817   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.156825   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.156828   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.159439   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.160184   36778 node_ready.go:49] node "ha-920193-m02" has status "Ready":"True"
	I1209 22:50:57.160211   36778 node_ready.go:38] duration metric: took 18.003728094s for node "ha-920193-m02" to be "Ready" ...
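The ~500 ms GET loop above against /api/v1/nodes/ha-920193-m02 is the node-readiness wait that finally reports "Ready":"True" here. A minimal illustrative sketch of the same polling pattern with client-go follows; it is not minikube's actual implementation, and the kubeconfig path and node name are assumptions taken from the log.

```go
// Sketch only: poll a node's Ready condition every 500ms, as in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumes ~/.kube/config
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		ok, err := nodeReady(ctx, cs, "ha-920193-m02")
		if err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic(ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```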
	I1209 22:50:57.160219   36778 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:50:57.160281   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:50:57.160291   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.160297   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.160301   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.163826   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.171109   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.171198   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9792g
	I1209 22:50:57.171207   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.171215   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.171218   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.175686   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:50:57.176418   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.176433   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.176440   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.176445   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.178918   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.179482   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.179502   36778 pod_ready.go:82] duration metric: took 8.366716ms for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.179511   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.179579   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pftgv
	I1209 22:50:57.179590   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.179601   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.179607   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.181884   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.182566   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.182584   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.182593   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.182603   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.184849   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.185336   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.185356   36778 pod_ready.go:82] duration metric: took 5.835616ms for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.185369   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.185431   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193
	I1209 22:50:57.185440   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.185446   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.185452   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.187419   36778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1209 22:50:57.188120   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.188138   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.188148   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.188155   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.190287   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.190719   36778 pod_ready.go:93] pod "etcd-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.190736   36778 pod_ready.go:82] duration metric: took 5.359942ms for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.190748   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.190809   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m02
	I1209 22:50:57.190819   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.190828   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.190835   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.192882   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.193624   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.193638   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.193645   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.193648   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.195725   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:57.196308   36778 pod_ready.go:93] pod "etcd-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.196330   36778 pod_ready.go:82] duration metric: took 5.570375ms for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.196346   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.357701   36778 request.go:632] Waited for 161.300261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:50:57.357803   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:50:57.357815   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.357826   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.357831   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.361143   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.557163   36778 request.go:632] Waited for 195.392304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.557255   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:57.557275   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.557286   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.557299   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.560687   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.561270   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.561292   36778 pod_ready.go:82] duration metric: took 364.939583ms for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
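The "Waited for … due to client-side throttling, not priority and fairness" messages are produced by client-go's default client-side rate limiter (roughly 5 QPS with a burst of 10), not by the API server. A hedged sketch of how such limits can be raised on a rest.Config is shown below; the values are arbitrary examples, not what minikube configures.

```go
// Sketch only: relax client-go's default client-side rate limiting (QPS/Burst).
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumes ~/.kube/config
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; the ~200ms waits in the log are this limiter backing off.
	cfg.QPS = 50
	cfg.Burst = 100

	_ = kubernetes.NewForConfigOrDie(cfg) // client built from the tuned config
}
```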
	I1209 22:50:57.561303   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.757400   36778 request.go:632] Waited for 196.034135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:50:57.757501   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:50:57.757514   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.757525   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.757533   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.761021   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.957152   36778 request.go:632] Waited for 195.395123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.957252   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:57.957262   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:57.957269   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:57.957273   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:57.961000   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:57.961523   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:57.961541   36778 pod_ready.go:82] duration metric: took 400.228352ms for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:57.961551   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.157823   36778 request.go:632] Waited for 196.207607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:50:58.157936   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:50:58.157948   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.157956   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.157960   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.161121   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:58.357017   36778 request.go:632] Waited for 194.771557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:58.357073   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:58.357091   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.357099   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.357103   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.360088   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:58.360518   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:58.360541   36778 pod_ready.go:82] duration metric: took 398.983882ms for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.360551   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.557689   36778 request.go:632] Waited for 197.047701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:50:58.557763   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:50:58.557772   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.557779   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.557783   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.561314   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:58.757454   36778 request.go:632] Waited for 195.361025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:58.757514   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:58.757519   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.757531   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.757540   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.760353   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:58.760931   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:58.760952   36778 pod_ready.go:82] duration metric: took 400.394843ms for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.760961   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:58.956933   36778 request.go:632] Waited for 195.877051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:50:58.956989   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:50:58.956993   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:58.957001   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:58.957005   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:58.960313   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.157481   36778 request.go:632] Waited for 196.370711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:59.157545   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:50:59.157551   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.157558   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.157562   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.160790   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.161308   36778 pod_ready.go:93] pod "kube-proxy-lntbt" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:59.161325   36778 pod_ready.go:82] duration metric: took 400.358082ms for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.161334   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.357539   36778 request.go:632] Waited for 196.144123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:50:59.357600   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:50:59.357605   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.357614   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.357619   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.360709   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.557525   36778 request.go:632] Waited for 196.134266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.557582   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.557587   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.557594   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.557599   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.561037   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:50:59.561650   36778 pod_ready.go:93] pod "kube-proxy-r8nhm" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:59.561671   36778 pod_ready.go:82] duration metric: took 400.330133ms for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.561686   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.757716   36778 request.go:632] Waited for 195.957167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:50:59.757794   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:50:59.757799   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.757806   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.757810   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.760629   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:59.957516   36778 request.go:632] Waited for 196.356707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.957571   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:50:59.957576   36778 round_trippers.go:469] Request Headers:
	I1209 22:50:59.957583   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:50:59.957589   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:50:59.960569   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:50:59.961033   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:50:59.961052   36778 pod_ready.go:82] duration metric: took 399.355328ms for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:50:59.961065   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:51:00.157215   36778 request.go:632] Waited for 196.068129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:51:00.157354   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:51:00.157371   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.157385   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.157393   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.160825   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:00.357607   36778 request.go:632] Waited for 196.256861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:51:00.357660   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:51:00.357665   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.357673   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.357676   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.360928   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:00.361370   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:51:00.361388   36778 pod_ready.go:82] duration metric: took 400.315143ms for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:51:00.361398   36778 pod_ready.go:39] duration metric: took 3.201168669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:51:00.361416   36778 api_server.go:52] waiting for apiserver process to appear ...
	I1209 22:51:00.361461   36778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 22:51:00.375321   36778 api_server.go:72] duration metric: took 21.521720453s to wait for apiserver process to appear ...
	I1209 22:51:00.375346   36778 api_server.go:88] waiting for apiserver healthz status ...
	I1209 22:51:00.375364   36778 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1209 22:51:00.379577   36778 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1209 22:51:00.379640   36778 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1209 22:51:00.379648   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.379656   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.379662   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.380589   36778 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1209 22:51:00.380716   36778 api_server.go:141] control plane version: v1.31.2
	I1209 22:51:00.380756   36778 api_server.go:131] duration metric: took 5.402425ms to wait for apiserver health ...
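After the system pods report Ready, the log probes /healthz (which returns "ok") and /version (v1.31.2). An equivalent check via the discovery client is sketched below, under the same kubeconfig assumption as the earlier sketch; it mirrors what the log does, not minikube's exact code path.

```go
// Sketch only: hit /healthz and read the server version, as the log does at 22:51:00.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body) // expect "ok"

	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.31.2
}
```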
	I1209 22:51:00.380766   36778 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 22:51:00.557205   36778 request.go:632] Waited for 176.35448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.557271   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.557277   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.557284   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.557289   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.561926   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:00.568583   36778 system_pods.go:59] 17 kube-system pods found
	I1209 22:51:00.568619   36778 system_pods.go:61] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:51:00.568631   36778 system_pods.go:61] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:51:00.568637   36778 system_pods.go:61] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:51:00.568643   36778 system_pods.go:61] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:51:00.568648   36778 system_pods.go:61] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:51:00.568652   36778 system_pods.go:61] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:51:00.568657   36778 system_pods.go:61] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:51:00.568662   36778 system_pods.go:61] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:51:00.568672   36778 system_pods.go:61] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:51:00.568677   36778 system_pods.go:61] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:51:00.568681   36778 system_pods.go:61] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:51:00.568687   36778 system_pods.go:61] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:51:00.568692   36778 system_pods.go:61] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:51:00.568699   36778 system_pods.go:61] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:51:00.568703   36778 system_pods.go:61] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:51:00.568709   36778 system_pods.go:61] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:51:00.568713   36778 system_pods.go:61] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:51:00.568720   36778 system_pods.go:74] duration metric: took 187.947853ms to wait for pod list to return data ...
	I1209 22:51:00.568736   36778 default_sa.go:34] waiting for default service account to be created ...
	I1209 22:51:00.757459   36778 request.go:632] Waited for 188.649373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:51:00.757529   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:51:00.757535   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.757542   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.757549   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.761133   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:00.761462   36778 default_sa.go:45] found service account: "default"
	I1209 22:51:00.761484   36778 default_sa.go:55] duration metric: took 192.741843ms for default service account to be created ...
	I1209 22:51:00.761493   36778 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 22:51:00.957815   36778 request.go:632] Waited for 196.251364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.957869   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:51:00.957874   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:00.957881   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:00.957886   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:00.962434   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:00.967784   36778 system_pods.go:86] 17 kube-system pods found
	I1209 22:51:00.967807   36778 system_pods.go:89] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:51:00.967813   36778 system_pods.go:89] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:51:00.967818   36778 system_pods.go:89] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:51:00.967822   36778 system_pods.go:89] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:51:00.967825   36778 system_pods.go:89] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:51:00.967829   36778 system_pods.go:89] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:51:00.967832   36778 system_pods.go:89] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:51:00.967836   36778 system_pods.go:89] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:51:00.967839   36778 system_pods.go:89] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:51:00.967843   36778 system_pods.go:89] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:51:00.967846   36778 system_pods.go:89] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:51:00.967849   36778 system_pods.go:89] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:51:00.967853   36778 system_pods.go:89] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:51:00.967856   36778 system_pods.go:89] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:51:00.967859   36778 system_pods.go:89] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:51:00.967862   36778 system_pods.go:89] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:51:00.967865   36778 system_pods.go:89] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:51:00.967872   36778 system_pods.go:126] duration metric: took 206.369849ms to wait for k8s-apps to be running ...
	I1209 22:51:00.967881   36778 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 22:51:00.967920   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:51:00.982635   36778 system_svc.go:56] duration metric: took 14.746001ms WaitForService to wait for kubelet
	I1209 22:51:00.982658   36778 kubeadm.go:582] duration metric: took 22.129061399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:51:00.982676   36778 node_conditions.go:102] verifying NodePressure condition ...
	I1209 22:51:01.157065   36778 request.go:632] Waited for 174.324712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1209 22:51:01.157132   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1209 22:51:01.157137   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:01.157146   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:01.157150   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:01.161631   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:01.162406   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:51:01.162427   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:51:01.162443   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:51:01.162449   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:51:01.162454   36778 node_conditions.go:105] duration metric: took 179.774021ms to run NodePressure ...
	I1209 22:51:01.162470   36778 start.go:241] waiting for startup goroutines ...
	I1209 22:51:01.162500   36778 start.go:255] writing updated cluster config ...
	I1209 22:51:01.164529   36778 out.go:201] 
	I1209 22:51:01.165967   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:01.166048   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:51:01.167621   36778 out.go:177] * Starting "ha-920193-m03" control-plane node in "ha-920193" cluster
	I1209 22:51:01.168868   36778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:51:01.168885   36778 cache.go:56] Caching tarball of preloaded images
	I1209 22:51:01.168992   36778 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:51:01.169010   36778 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:51:01.169110   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:51:01.169269   36778 start.go:360] acquireMachinesLock for ha-920193-m03: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:51:01.169312   36778 start.go:364] duration metric: took 23.987µs to acquireMachinesLock for "ha-920193-m03"
	I1209 22:51:01.169336   36778 start.go:93] Provisioning new machine with config: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:51:01.169433   36778 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1209 22:51:01.171416   36778 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 22:51:01.171522   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:01.171583   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:01.186366   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42027
	I1209 22:51:01.186874   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:01.187404   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:01.187428   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:01.187781   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:01.187979   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:01.188140   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:01.188306   36778 start.go:159] libmachine.API.Create for "ha-920193" (driver="kvm2")
	I1209 22:51:01.188339   36778 client.go:168] LocalClient.Create starting
	I1209 22:51:01.188376   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 22:51:01.188415   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:51:01.188430   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:51:01.188479   36778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 22:51:01.188497   36778 main.go:141] libmachine: Decoding PEM data...
	I1209 22:51:01.188505   36778 main.go:141] libmachine: Parsing certificate...
	I1209 22:51:01.188519   36778 main.go:141] libmachine: Running pre-create checks...
	I1209 22:51:01.188524   36778 main.go:141] libmachine: (ha-920193-m03) Calling .PreCreateCheck
	I1209 22:51:01.188706   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetConfigRaw
	I1209 22:51:01.189120   36778 main.go:141] libmachine: Creating machine...
	I1209 22:51:01.189133   36778 main.go:141] libmachine: (ha-920193-m03) Calling .Create
	I1209 22:51:01.189263   36778 main.go:141] libmachine: (ha-920193-m03) Creating KVM machine...
	I1209 22:51:01.190619   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found existing default KVM network
	I1209 22:51:01.190780   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found existing private KVM network mk-ha-920193
	I1209 22:51:01.190893   36778 main.go:141] libmachine: (ha-920193-m03) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03 ...
	I1209 22:51:01.190907   36778 main.go:141] libmachine: (ha-920193-m03) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:51:01.191000   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.190898   37541 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:51:01.191087   36778 main.go:141] libmachine: (ha-920193-m03) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 22:51:01.428399   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.428270   37541 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa...
	I1209 22:51:01.739906   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.739799   37541 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/ha-920193-m03.rawdisk...
	I1209 22:51:01.739933   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Writing magic tar header
	I1209 22:51:01.739943   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Writing SSH key tar header
	I1209 22:51:01.739951   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:01.739915   37541 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03 ...
	I1209 22:51:01.740035   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03
	I1209 22:51:01.740064   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03 (perms=drwx------)
	I1209 22:51:01.740080   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 22:51:01.740097   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:51:01.740107   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 22:51:01.740114   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 22:51:01.740127   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 22:51:01.740140   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 22:51:01.740154   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 22:51:01.740167   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 22:51:01.740178   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home/jenkins
	I1209 22:51:01.740189   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Checking permissions on dir: /home
	I1209 22:51:01.740219   36778 main.go:141] libmachine: (ha-920193-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 22:51:01.740244   36778 main.go:141] libmachine: (ha-920193-m03) Creating domain...
	I1209 22:51:01.740252   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Skipping /home - not owner
	I1209 22:51:01.741067   36778 main.go:141] libmachine: (ha-920193-m03) define libvirt domain using xml: 
	I1209 22:51:01.741086   36778 main.go:141] libmachine: (ha-920193-m03) <domain type='kvm'>
	I1209 22:51:01.741093   36778 main.go:141] libmachine: (ha-920193-m03)   <name>ha-920193-m03</name>
	I1209 22:51:01.741098   36778 main.go:141] libmachine: (ha-920193-m03)   <memory unit='MiB'>2200</memory>
	I1209 22:51:01.741103   36778 main.go:141] libmachine: (ha-920193-m03)   <vcpu>2</vcpu>
	I1209 22:51:01.741110   36778 main.go:141] libmachine: (ha-920193-m03)   <features>
	I1209 22:51:01.741115   36778 main.go:141] libmachine: (ha-920193-m03)     <acpi/>
	I1209 22:51:01.741119   36778 main.go:141] libmachine: (ha-920193-m03)     <apic/>
	I1209 22:51:01.741124   36778 main.go:141] libmachine: (ha-920193-m03)     <pae/>
	I1209 22:51:01.741128   36778 main.go:141] libmachine: (ha-920193-m03)     
	I1209 22:51:01.741133   36778 main.go:141] libmachine: (ha-920193-m03)   </features>
	I1209 22:51:01.741147   36778 main.go:141] libmachine: (ha-920193-m03)   <cpu mode='host-passthrough'>
	I1209 22:51:01.741152   36778 main.go:141] libmachine: (ha-920193-m03)   
	I1209 22:51:01.741157   36778 main.go:141] libmachine: (ha-920193-m03)   </cpu>
	I1209 22:51:01.741162   36778 main.go:141] libmachine: (ha-920193-m03)   <os>
	I1209 22:51:01.741166   36778 main.go:141] libmachine: (ha-920193-m03)     <type>hvm</type>
	I1209 22:51:01.741171   36778 main.go:141] libmachine: (ha-920193-m03)     <boot dev='cdrom'/>
	I1209 22:51:01.741176   36778 main.go:141] libmachine: (ha-920193-m03)     <boot dev='hd'/>
	I1209 22:51:01.741184   36778 main.go:141] libmachine: (ha-920193-m03)     <bootmenu enable='no'/>
	I1209 22:51:01.741188   36778 main.go:141] libmachine: (ha-920193-m03)   </os>
	I1209 22:51:01.741225   36778 main.go:141] libmachine: (ha-920193-m03)   <devices>
	I1209 22:51:01.741245   36778 main.go:141] libmachine: (ha-920193-m03)     <disk type='file' device='cdrom'>
	I1209 22:51:01.741288   36778 main.go:141] libmachine: (ha-920193-m03)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/boot2docker.iso'/>
	I1209 22:51:01.741325   36778 main.go:141] libmachine: (ha-920193-m03)       <target dev='hdc' bus='scsi'/>
	I1209 22:51:01.741339   36778 main.go:141] libmachine: (ha-920193-m03)       <readonly/>
	I1209 22:51:01.741350   36778 main.go:141] libmachine: (ha-920193-m03)     </disk>
	I1209 22:51:01.741361   36778 main.go:141] libmachine: (ha-920193-m03)     <disk type='file' device='disk'>
	I1209 22:51:01.741373   36778 main.go:141] libmachine: (ha-920193-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 22:51:01.741386   36778 main.go:141] libmachine: (ha-920193-m03)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/ha-920193-m03.rawdisk'/>
	I1209 22:51:01.741397   36778 main.go:141] libmachine: (ha-920193-m03)       <target dev='hda' bus='virtio'/>
	I1209 22:51:01.741408   36778 main.go:141] libmachine: (ha-920193-m03)     </disk>
	I1209 22:51:01.741418   36778 main.go:141] libmachine: (ha-920193-m03)     <interface type='network'>
	I1209 22:51:01.741429   36778 main.go:141] libmachine: (ha-920193-m03)       <source network='mk-ha-920193'/>
	I1209 22:51:01.741437   36778 main.go:141] libmachine: (ha-920193-m03)       <model type='virtio'/>
	I1209 22:51:01.741447   36778 main.go:141] libmachine: (ha-920193-m03)     </interface>
	I1209 22:51:01.741456   36778 main.go:141] libmachine: (ha-920193-m03)     <interface type='network'>
	I1209 22:51:01.741472   36778 main.go:141] libmachine: (ha-920193-m03)       <source network='default'/>
	I1209 22:51:01.741483   36778 main.go:141] libmachine: (ha-920193-m03)       <model type='virtio'/>
	I1209 22:51:01.741496   36778 main.go:141] libmachine: (ha-920193-m03)     </interface>
	I1209 22:51:01.741507   36778 main.go:141] libmachine: (ha-920193-m03)     <serial type='pty'>
	I1209 22:51:01.741516   36778 main.go:141] libmachine: (ha-920193-m03)       <target port='0'/>
	I1209 22:51:01.741525   36778 main.go:141] libmachine: (ha-920193-m03)     </serial>
	I1209 22:51:01.741534   36778 main.go:141] libmachine: (ha-920193-m03)     <console type='pty'>
	I1209 22:51:01.741544   36778 main.go:141] libmachine: (ha-920193-m03)       <target type='serial' port='0'/>
	I1209 22:51:01.741552   36778 main.go:141] libmachine: (ha-920193-m03)     </console>
	I1209 22:51:01.741566   36778 main.go:141] libmachine: (ha-920193-m03)     <rng model='virtio'>
	I1209 22:51:01.741580   36778 main.go:141] libmachine: (ha-920193-m03)       <backend model='random'>/dev/random</backend>
	I1209 22:51:01.741590   36778 main.go:141] libmachine: (ha-920193-m03)     </rng>
	I1209 22:51:01.741597   36778 main.go:141] libmachine: (ha-920193-m03)     
	I1209 22:51:01.741606   36778 main.go:141] libmachine: (ha-920193-m03)     
	I1209 22:51:01.741616   36778 main.go:141] libmachine: (ha-920193-m03)   </devices>
	I1209 22:51:01.741623   36778 main.go:141] libmachine: (ha-920193-m03) </domain>
	I1209 22:51:01.741635   36778 main.go:141] libmachine: (ha-920193-m03) 
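The XML above is the libvirt domain definition for the new m03 VM (2 vCPUs, 2200 MiB, boot2docker ISO as a cdrom, a raw virtio disk, and two virtio NICs on the default and mk-ha-920193 networks). The next log lines define and start that domain. A hedged sketch of the same step with the libvirt Go bindings (libvirt.org/go/libvirt) follows; the XML file path is a placeholder, and this is not the driver's actual code.

```go
// Sketch only: define and start a KVM domain from an XML description,
// the step the log performs right after printing the XML above.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-920193-m03.xml") // placeholder path for the XML shown above
	if err != nil {
		panic(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config dump
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "Creating domain..." (i.e. start it)
		panic(err)
	}
	fmt.Println("domain started; waiting for it to obtain an IP next")
}
```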
	I1209 22:51:01.749628   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:ca:84:fc in network default
	I1209 22:51:01.750354   36778 main.go:141] libmachine: (ha-920193-m03) Ensuring networks are active...
	I1209 22:51:01.750395   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:01.751100   36778 main.go:141] libmachine: (ha-920193-m03) Ensuring network default is active
	I1209 22:51:01.751465   36778 main.go:141] libmachine: (ha-920193-m03) Ensuring network mk-ha-920193 is active
	I1209 22:51:01.751930   36778 main.go:141] libmachine: (ha-920193-m03) Getting domain xml...
	I1209 22:51:01.752802   36778 main.go:141] libmachine: (ha-920193-m03) Creating domain...
	I1209 22:51:03.003454   36778 main.go:141] libmachine: (ha-920193-m03) Waiting to get IP...
	I1209 22:51:03.004238   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.004647   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.004670   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.004626   37541 retry.go:31] will retry after 297.46379ms: waiting for machine to come up
	I1209 22:51:03.304151   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.304627   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.304651   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.304586   37541 retry.go:31] will retry after 341.743592ms: waiting for machine to come up
	I1209 22:51:03.648185   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.648648   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.648681   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.648610   37541 retry.go:31] will retry after 348.703343ms: waiting for machine to come up
	I1209 22:51:03.999220   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:03.999761   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:03.999783   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:03.999722   37541 retry.go:31] will retry after 471.208269ms: waiting for machine to come up
	I1209 22:51:04.473118   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:04.473644   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:04.473698   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:04.473622   37541 retry.go:31] will retry after 567.031016ms: waiting for machine to come up
	I1209 22:51:05.042388   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:05.042845   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:05.042890   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:05.042828   37541 retry.go:31] will retry after 635.422002ms: waiting for machine to come up
	I1209 22:51:05.679729   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:05.680179   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:05.680197   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:05.680151   37541 retry.go:31] will retry after 1.009913666s: waiting for machine to come up
	I1209 22:51:06.691434   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:06.692093   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:06.692115   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:06.692049   37541 retry.go:31] will retry after 1.22911274s: waiting for machine to come up
	I1209 22:51:07.923301   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:07.923871   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:07.923895   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:07.923821   37541 retry.go:31] will retry after 1.262587003s: waiting for machine to come up
	I1209 22:51:09.187598   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:09.188051   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:09.188081   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:09.188005   37541 retry.go:31] will retry after 2.033467764s: waiting for machine to come up
	I1209 22:51:11.223284   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:11.223845   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:11.223872   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:11.223795   37541 retry.go:31] will retry after 2.889234368s: waiting for machine to come up
	I1209 22:51:14.116824   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:14.117240   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:14.117262   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:14.117201   37541 retry.go:31] will retry after 2.84022101s: waiting for machine to come up
	I1209 22:51:16.958771   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:16.959194   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:16.959219   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:16.959151   37541 retry.go:31] will retry after 3.882632517s: waiting for machine to come up
	I1209 22:51:20.846163   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:20.846626   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find current IP address of domain ha-920193-m03 in network mk-ha-920193
	I1209 22:51:20.846647   36778 main.go:141] libmachine: (ha-920193-m03) DBG | I1209 22:51:20.846582   37541 retry.go:31] will retry after 4.879681656s: waiting for machine to come up
	I1209 22:51:25.727341   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.727988   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has current primary IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.728010   36778 main.go:141] libmachine: (ha-920193-m03) Found IP for machine: 192.168.39.45
	I1209 22:51:25.728024   36778 main.go:141] libmachine: (ha-920193-m03) Reserving static IP address...
	I1209 22:51:25.728426   36778 main.go:141] libmachine: (ha-920193-m03) DBG | unable to find host DHCP lease matching {name: "ha-920193-m03", mac: "52:54:00:50:0a:7f", ip: "192.168.39.45"} in network mk-ha-920193
	I1209 22:51:25.801758   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Getting to WaitForSSH function...
	I1209 22:51:25.801788   36778 main.go:141] libmachine: (ha-920193-m03) Reserved static IP address: 192.168.39.45
	I1209 22:51:25.801801   36778 main.go:141] libmachine: (ha-920193-m03) Waiting for SSH to be available...
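
The repeated "will retry after ...: waiting for machine to come up" lines above come from a backoff loop that keeps polling the libvirt network's DHCP leases until the new domain reports an IP. A minimal Go sketch of that pattern is below; the lookupIP stub, the jitter and the growth factor are assumptions for illustration, not minikube's actual retry.go.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var (
        errNoIP  = errors.New("unable to find current IP address")
        attempts int
    )

    // lookupIP stands in for querying the network's DHCP leases by MAC address;
    // here it simply succeeds on the fifth call so the example terminates.
    func lookupIP(mac string) (string, error) {
        attempts++
        if attempts < 5 {
            return "", errNoIP
        }
        return "192.168.39.45", nil
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            // Random jitter on top of a growing base delay explains the uneven
            // intervals seen in the log.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
        return "", fmt.Errorf("timed out waiting for an IP on MAC %s", mac)
    }

    func main() {
        ip, err := waitForIP("52:54:00:50:0a:7f", 2*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("found IP:", ip)
    }
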
	I1209 22:51:25.804862   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.805259   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:minikube Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:25.805292   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.805437   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Using SSH client type: external
	I1209 22:51:25.805466   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa (-rw-------)
	I1209 22:51:25.805497   36778 main.go:141] libmachine: (ha-920193-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 22:51:25.805521   36778 main.go:141] libmachine: (ha-920193-m03) DBG | About to run SSH command:
	I1209 22:51:25.805536   36778 main.go:141] libmachine: (ha-920193-m03) DBG | exit 0
	I1209 22:51:25.927825   36778 main.go:141] libmachine: (ha-920193-m03) DBG | SSH cmd err, output: <nil>: 
	I1209 22:51:25.928111   36778 main.go:141] libmachine: (ha-920193-m03) KVM machine creation complete!
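
The WaitForSSH step above probes the guest by running the system ssh client with "exit 0" until the command succeeds, using the flag list printed in the log. The sketch below mirrors that probe with os/exec; the retry cadence, timeout and key path are assumptions, not the driver's exact behaviour.

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    // sshReady returns true once "ssh ... docker@<ip> exit 0" exits cleanly.
    func sshReady(ip, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes", "-i", keyPath,
            "-p", "22", "docker@" + ip, "exit 0",
        }
        return exec.Command("/usr/bin/ssh", args...).Run() == nil
    }

    func main() {
        for i := 0; i < 30; i++ { // assumed retry budget
            if sshReady("192.168.39.45", "/path/to/id_rsa") {
                log.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for SSH")
    }
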
	I1209 22:51:25.928439   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetConfigRaw
	I1209 22:51:25.928948   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:25.929144   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:25.929273   36778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 22:51:25.929318   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetState
	I1209 22:51:25.930677   36778 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 22:51:25.930689   36778 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 22:51:25.930694   36778 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 22:51:25.930702   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:25.933545   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.933940   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:25.933962   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:25.934133   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:25.934287   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:25.934450   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:25.934592   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:25.934747   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:25.934964   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:25.934979   36778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 22:51:26.038809   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:51:26.038831   36778 main.go:141] libmachine: Detecting the provisioner...
	I1209 22:51:26.038839   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.041686   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.041976   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.042008   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.042164   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.042336   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.042474   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.042609   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.042802   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.042955   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.042966   36778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 22:51:26.148122   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 22:51:26.148211   36778 main.go:141] libmachine: found compatible host: buildroot
	I1209 22:51:26.148225   36778 main.go:141] libmachine: Provisioning with buildroot...
	I1209 22:51:26.148236   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:26.148529   36778 buildroot.go:166] provisioning hostname "ha-920193-m03"
	I1209 22:51:26.148558   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:26.148758   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.151543   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.151998   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.152027   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.152153   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.152318   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.152485   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.152628   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.152792   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.152967   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.152984   36778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193-m03 && echo "ha-920193-m03" | sudo tee /etc/hostname
	I1209 22:51:26.273873   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193-m03
	
	I1209 22:51:26.273909   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.276949   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.277338   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.277363   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.277530   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.277710   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.277857   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.278009   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.278182   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.278378   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.278395   36778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:51:26.396863   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:51:26.396892   36778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:51:26.396911   36778 buildroot.go:174] setting up certificates
	I1209 22:51:26.396941   36778 provision.go:84] configureAuth start
	I1209 22:51:26.396969   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetMachineName
	I1209 22:51:26.397249   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:26.400060   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.400552   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.400587   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.400787   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.403205   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.403621   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.403649   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.403809   36778 provision.go:143] copyHostCerts
	I1209 22:51:26.403843   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:51:26.403883   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:51:26.403895   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:51:26.403963   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:51:26.404040   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:51:26.404057   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:51:26.404065   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:51:26.404088   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:51:26.404134   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:51:26.404151   36778 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:51:26.404158   36778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:51:26.404179   36778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:51:26.404226   36778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193-m03 san=[127.0.0.1 192.168.39.45 ha-920193-m03 localhost minikube]
	I1209 22:51:26.742826   36778 provision.go:177] copyRemoteCerts
	I1209 22:51:26.742899   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:51:26.742929   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.745666   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.745993   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.746025   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.746168   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.746370   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.746525   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.746673   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:26.830893   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:51:26.830957   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:51:26.856889   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:51:26.856964   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 22:51:26.883482   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:51:26.883555   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 22:51:26.908478   36778 provision.go:87] duration metric: took 511.5225ms to configureAuth
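
configureAuth generates a server certificate for the new node whose SANs are the list logged above (127.0.0.1, 192.168.39.45, ha-920193-m03, localhost, minikube), signed by the cluster CA, then copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. Below is a self-contained Go sketch of that kind of SAN-bearing server certificate using crypto/x509; the throwaway in-memory CA, key sizes and validity periods are assumptions — minikube signs with its existing ca.pem/ca-key.pem instead.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA for the sketch only.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // Server certificate with the SANs seen in the log.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-920193-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-920193-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.45")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
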
	I1209 22:51:26.908504   36778 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:51:26.908720   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:26.908806   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:26.911525   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.911882   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:26.911910   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:26.912106   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:26.912305   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.912470   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:26.912617   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:26.912830   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:26.913029   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:26.913046   36778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:51:27.123000   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:51:27.123030   36778 main.go:141] libmachine: Checking connection to Docker...
	I1209 22:51:27.123040   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetURL
	I1209 22:51:27.124367   36778 main.go:141] libmachine: (ha-920193-m03) DBG | Using libvirt version 6000000
	I1209 22:51:27.126749   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.127125   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.127158   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.127291   36778 main.go:141] libmachine: Docker is up and running!
	I1209 22:51:27.127312   36778 main.go:141] libmachine: Reticulating splines...
	I1209 22:51:27.127327   36778 client.go:171] duration metric: took 25.938971166s to LocalClient.Create
	I1209 22:51:27.127361   36778 start.go:167] duration metric: took 25.939054874s to libmachine.API.Create "ha-920193"
	I1209 22:51:27.127375   36778 start.go:293] postStartSetup for "ha-920193-m03" (driver="kvm2")
	I1209 22:51:27.127391   36778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:51:27.127417   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.127659   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:51:27.127685   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:27.130451   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.130869   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.130897   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.131187   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.131380   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.131593   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.131737   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:27.214943   36778 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:51:27.219203   36778 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:51:27.219230   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:51:27.219297   36778 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:51:27.219368   36778 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:51:27.219377   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:51:27.219454   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:51:27.229647   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:51:27.256219   36778 start.go:296] duration metric: took 128.828108ms for postStartSetup
	I1209 22:51:27.256272   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetConfigRaw
	I1209 22:51:27.256939   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:27.259520   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.259847   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.259871   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.260187   36778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:51:27.260393   36778 start.go:128] duration metric: took 26.090950019s to createHost
	I1209 22:51:27.260418   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:27.262865   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.263234   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.263258   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.263424   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.263637   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.263812   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.263948   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.264111   36778 main.go:141] libmachine: Using SSH client type: native
	I1209 22:51:27.264266   36778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1209 22:51:27.264276   36778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:51:27.367958   36778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733784687.346724594
	
	I1209 22:51:27.367980   36778 fix.go:216] guest clock: 1733784687.346724594
	I1209 22:51:27.367990   36778 fix.go:229] Guest: 2024-12-09 22:51:27.346724594 +0000 UTC Remote: 2024-12-09 22:51:27.260405928 +0000 UTC m=+144.153092475 (delta=86.318666ms)
	I1209 22:51:27.368010   36778 fix.go:200] guest clock delta is within tolerance: 86.318666ms
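
The fix.go lines above compare the guest's clock (from "date +%s.%N") against the host-side timestamp and only act when the delta exceeds a tolerance; here the ~86 ms delta passes. A tiny Go sketch of that check, using the values from the log, is below; the one-second threshold is an assumption, not minikube's actual tolerance.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values taken from the log lines above.
        guest := time.Unix(1733784687, 346724594)
        host := time.Date(2024, 12, 9, 22, 51, 27, 260405928, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed threshold for illustration
        if delta > tolerance {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
        } else {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        }
    }
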
	I1209 22:51:27.368017   36778 start.go:83] releasing machines lock for "ha-920193-m03", held for 26.19869273s
	I1209 22:51:27.368043   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.368295   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:27.370584   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.370886   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.370925   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.372694   36778 out.go:177] * Found network options:
	I1209 22:51:27.373916   36778 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.43
	W1209 22:51:27.375001   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 22:51:27.375023   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:51:27.375036   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.375488   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.375695   36778 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:51:27.375813   36778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:51:27.375854   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	W1209 22:51:27.375861   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 22:51:27.375898   36778 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 22:51:27.375979   36778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:51:27.376001   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:51:27.378647   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.378715   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.378991   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.379016   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.379059   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:27.379077   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:27.379200   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.379345   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:51:27.379350   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.379608   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.379611   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:51:27.379810   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:51:27.379814   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:27.379979   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:51:27.613722   36778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:51:27.619553   36778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:51:27.619634   36778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:51:27.635746   36778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 22:51:27.635772   36778 start.go:495] detecting cgroup driver to use...
	I1209 22:51:27.635826   36778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:51:27.653845   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:51:27.668792   36778 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:51:27.668852   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:51:27.683547   36778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:51:27.698233   36778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:51:27.824917   36778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:51:27.972308   36778 docker.go:233] disabling docker service ...
	I1209 22:51:27.972387   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:51:27.987195   36778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:51:28.000581   36778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:51:28.137925   36778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:51:28.271243   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:51:28.285221   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:51:28.303416   36778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:51:28.303486   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.314415   36778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:51:28.314487   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.324832   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.336511   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.346899   36778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:51:28.358193   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.368602   36778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.386409   36778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:51:28.397070   36778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:51:28.406418   36778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 22:51:28.406478   36778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 22:51:28.419010   36778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 22:51:28.428601   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:51:28.547013   36778 ssh_runner.go:195] Run: sudo systemctl restart crio
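
The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager (with conmon in the "pod" cgroup) before crio is restarted. The Go sketch below applies the same two edits to a local copy of the file with regexp; operating on a local file rather than over SSH, and the file name, are assumptions for illustration.

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Replace any existing pause_image line with the desired image.
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        // Force cgroupfs and pin conmon to the "pod" cgroup, as the sed commands do.
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }

After edits like these, the real flow reloads systemd and restarts crio (as in the log) so the new pause image and cgroup settings take effect.
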
	I1209 22:51:28.639590   36778 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:51:28.639672   36778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:51:28.644400   36778 start.go:563] Will wait 60s for crictl version
	I1209 22:51:28.644447   36778 ssh_runner.go:195] Run: which crictl
	I1209 22:51:28.648450   36778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:51:28.685819   36778 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:51:28.685915   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:51:28.713055   36778 ssh_runner.go:195] Run: crio --version
	I1209 22:51:28.743093   36778 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:51:28.744486   36778 out.go:177]   - env NO_PROXY=192.168.39.102
	I1209 22:51:28.745701   36778 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.43
	I1209 22:51:28.746682   36778 main.go:141] libmachine: (ha-920193-m03) Calling .GetIP
	I1209 22:51:28.749397   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:28.749762   36778 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:51:28.749786   36778 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:51:28.749968   36778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:51:28.754027   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 22:51:28.765381   36778 mustload.go:65] Loading cluster: ha-920193
	I1209 22:51:28.765606   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:28.765871   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:28.765916   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:28.781482   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I1209 22:51:28.781893   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:28.782266   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:28.782287   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:28.782526   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:28.782726   36778 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:51:28.784149   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:51:28.784420   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:28.784463   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:28.799758   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1209 22:51:28.800232   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:28.800726   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:28.800752   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:28.801514   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:28.801709   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:51:28.801891   36778 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.45
	I1209 22:51:28.801903   36778 certs.go:194] generating shared ca certs ...
	I1209 22:51:28.801923   36778 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:51:28.802065   36778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:51:28.802119   36778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:51:28.802134   36778 certs.go:256] generating profile certs ...
	I1209 22:51:28.802225   36778 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:51:28.802259   36778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a
	I1209 22:51:28.802283   36778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.43 192.168.39.45 192.168.39.254]
	I1209 22:51:28.918029   36778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a ...
	I1209 22:51:28.918070   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a: {Name:mkb9baad787ad98ea3bbef921d1279904d63e258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:51:28.918300   36778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a ...
	I1209 22:51:28.918321   36778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a: {Name:mk6d0bc06f9a231b982576741314205a71ae81f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:51:28.918454   36778 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.e2d8c66a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:51:28.918653   36778 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.e2d8c66a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
	I1209 22:51:28.918832   36778 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:51:28.918852   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:51:28.918869   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:51:28.918882   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:51:28.918897   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:51:28.918909   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:51:28.918920   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:51:28.918930   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:51:28.918940   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:51:28.918992   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:51:28.919020   36778 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:51:28.919030   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:51:28.919050   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:51:28.919071   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:51:28.919092   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:51:28.919165   36778 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:51:28.919200   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:51:28.919214   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:51:28.919226   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:28.919256   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:51:28.922496   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:28.922907   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:51:28.922924   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:28.923121   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:51:28.923334   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:51:28.923493   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:51:28.923637   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:51:28.995976   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 22:51:29.001595   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 22:51:29.014651   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 22:51:29.018976   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 22:51:29.031698   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 22:51:29.035774   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 22:51:29.047740   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 22:51:29.055239   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 22:51:29.068897   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 22:51:29.073278   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 22:51:29.083471   36778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 22:51:29.087771   36778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1209 22:51:29.099200   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:51:29.124484   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:51:29.146898   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:51:29.170925   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:51:29.194172   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1209 22:51:29.216851   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 22:51:29.238922   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:51:29.261472   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:51:29.285294   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:51:29.308795   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:51:29.332153   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:51:29.356878   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 22:51:29.373363   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 22:51:29.389889   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 22:51:29.406229   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 22:51:29.422321   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 22:51:29.439481   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1209 22:51:29.457534   36778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
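The transfers above stage the shared cluster CA material, the service-account keypair, and the kubeconfig onto the new machine under /var/lib/minikube, with user-facing CA copies under /usr/share/ca-certificates. A minimal spot-check sketch for one of the transferred certificates (paths taken from the log above; assumes a root-capable shell inside the guest, e.g. via minikube ssh):

    # Confirm the API server certificate landed and inspect its subject and expiry.
    sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt
    # The SAN list should include the node IP and the HA virtual IP.
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'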
	I1209 22:51:29.474790   36778 ssh_runner.go:195] Run: openssl version
	I1209 22:51:29.480386   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:51:29.491491   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:51:29.496002   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:51:29.496065   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:51:29.501912   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:51:29.512683   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:51:29.523589   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:29.527903   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:29.527953   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:51:29.533408   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 22:51:29.544241   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:51:29.554741   36778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:51:29.559538   36778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:51:29.559622   36778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:51:29.565390   36778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
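The test/ln pairs above follow OpenSSL's subject-hash lookup convention: each CA placed in /usr/share/ca-certificates is first linked into /etc/ssl/certs under its own name, then linked again as <subject-hash>.0 so OpenSSL can find it by hash. A minimal sketch of the same pattern for the minikube CA (the computed hash should match the b5213941.0 link created above):

    # Compute the subject hash and create the lookup symlink OpenSSL expects.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0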
	I1209 22:51:29.576363   36778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:51:29.580324   36778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 22:51:29.580397   36778 kubeadm.go:934] updating node {m03 192.168.39.45 8443 v1.31.2 crio true true} ...
	I1209 22:51:29.580506   36778 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
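The kubelet fragment above is rendered into the systemd drop-in copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down in the log, carrying the per-node flags (--node-ip, --hostname-override, --bootstrap-kubeconfig). Once the node is up, the effective unit can be inspected with standard systemd tooling, for example:

    # Show the kubelet unit together with its drop-ins, then confirm it is running.
    systemctl cat kubelet
    systemctl is-active kubelet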
	I1209 22:51:29.580552   36778 kube-vip.go:115] generating kube-vip config ...
	I1209 22:51:29.580597   36778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:51:29.601123   36778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:51:29.601198   36778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
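This manifest is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp a few lines below), so the kubelet runs kube-vip as a static pod. With vip_arp, cp_enable and lb_enable all set, the elected leader announces the HA virtual IP 192.168.39.254 on eth0 via ARP and load-balances port 8443 across the control-plane members. A quick check on whichever node currently holds the lease (sketch; assumes crictl is available in the guest, as is usual for the crio runtime used here):

    # The leader should carry the VIP as a secondary address on eth0.
    ip -4 addr show dev eth0 | grep 192.168.39.254
    # kube-vip runs as a static pod managed directly by the kubelet.
    sudo crictl ps --name kube-vip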
	I1209 22:51:29.601245   36778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:51:29.616816   36778 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 22:51:29.616873   36778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 22:51:29.626547   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 22:51:29.626551   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1209 22:51:29.626581   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:51:29.626608   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:51:29.626662   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 22:51:29.626551   36778 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1209 22:51:29.626680   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 22:51:29.626713   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:51:29.630710   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 22:51:29.630743   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 22:51:29.661909   36778 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:51:29.661957   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 22:51:29.661993   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 22:51:29.662034   36778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 22:51:29.693387   36778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 22:51:29.693423   36778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
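The kubectl, kubeadm and kubelet binaries referenced above come from the official Kubernetes release bucket, with the checksum=file:...sha256 suffix pointing at the published digest; in this run they are copied to the node from the local minikube cache over SSH. For reference, a manual download of the same artifact can be verified the same way (URL pattern taken from the log; adjust version and architecture as needed):

    # Fetch a release binary and its checksum, then verify before installing.
    curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check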
	I1209 22:51:30.497307   36778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 22:51:30.507919   36778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 22:51:30.525676   36778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:51:30.544107   36778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 22:51:30.560963   36778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:51:30.564949   36778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
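The one-liner above keeps /etc/hosts idempotent: it drops any existing control-plane.minikube.internal entry and appends the current HA virtual IP. The same pattern, written out more readably (tab-separated host entry, temp file name is illustrative):

    # Remove any stale control-plane.minikube.internal line, then append the VIP mapping.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.39.254\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts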
	I1209 22:51:30.577803   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:51:30.711834   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:51:30.729249   36778 host.go:66] Checking if "ha-920193" exists ...
	I1209 22:51:30.729790   36778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:51:30.729852   36778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:51:30.745894   36778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I1209 22:51:30.746400   36778 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:51:30.746903   36778 main.go:141] libmachine: Using API Version  1
	I1209 22:51:30.746923   36778 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:51:30.747244   36778 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:51:30.747474   36778 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:51:30.747637   36778 start.go:317] joinCluster: &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:51:30.747751   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 22:51:30.747772   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:51:30.750739   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:30.751188   36778 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:51:30.751212   36778 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:51:30.751382   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:51:30.751610   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:51:30.751784   36778 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:51:30.751955   36778 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:51:30.921112   36778 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:51:30.921184   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uin3tt.yfutths3ueks8fx7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m03 --control-plane --apiserver-advertise-address=192.168.39.45 --apiserver-bind-port=8443"
	I1209 22:51:51.979391   36778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uin3tt.yfutths3ueks8fx7 --discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-920193-m03 --control-plane --apiserver-advertise-address=192.168.39.45 --apiserver-bind-port=8443": (21.05816353s)
	I1209 22:51:51.979426   36778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 22:51:52.687851   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-920193-m03 minikube.k8s.io/updated_at=2024_12_09T22_51_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=ha-920193 minikube.k8s.io/primary=false
	I1209 22:51:52.803074   36778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-920193-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 22:51:52.923717   36778 start.go:319] duration metric: took 22.176073752s to joinCluster
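After the kubeadm join succeeds, the new member is labeled with minikube metadata and its control-plane NoSchedule taint is removed (the trailing "-" on the taint argument deletes it), so workloads can also land on this node. The result can be checked from any kubeconfig pointing at the cluster, for example:

    # Verify the third control-plane node joined, is labeled, and carries no NoSchedule taint.
    kubectl get nodes -o wide
    kubectl get node ha-920193-m03 -o jsonpath='{.spec.taints}{"\n"}'
    kubectl get node ha-920193-m03 --show-labels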
	I1209 22:51:52.923810   36778 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 22:51:52.924248   36778 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:51:52.925117   36778 out.go:177] * Verifying Kubernetes components...
	I1209 22:51:52.927170   36778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:51:53.166362   36778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:51:53.186053   36778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:51:53.186348   36778 kapi.go:59] client config for ha-920193: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.crt", KeyFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key", CAFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c9a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 22:51:53.186424   36778 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1209 22:51:53.186669   36778 node_ready.go:35] waiting up to 6m0s for node "ha-920193-m03" to be "Ready" ...
	I1209 22:51:53.186744   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:53.186755   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:53.186774   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:53.186786   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:53.191049   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:53.686961   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:53.686986   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:53.686997   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:53.687007   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:53.691244   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:54.186985   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:54.187011   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:54.187024   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:54.187030   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:54.265267   36778 round_trippers.go:574] Response Status: 200 OK in 78 milliseconds
	I1209 22:51:54.687008   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:54.687031   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:54.687042   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:54.687050   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:54.690480   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:55.187500   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:55.187525   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:55.187535   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:55.187540   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:55.191178   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:55.191830   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:51:55.687762   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:55.687790   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:55.687802   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:55.687832   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:55.691762   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:56.187494   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:56.187516   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:56.187534   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:56.187543   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:56.191706   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:56.687665   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:56.687691   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:56.687700   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:56.687705   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:56.690707   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:51:57.187710   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:57.187731   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:57.187739   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:57.187743   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:57.191208   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:57.192244   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:51:57.687242   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:57.687266   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:57.687277   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:57.687284   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:57.692231   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:51:58.187334   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:58.187369   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:58.187404   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:58.187410   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:58.190420   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:51:58.687040   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:58.687060   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:58.687087   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:58.687092   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:58.690458   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:59.187542   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:59.187579   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:59.187590   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:59.187598   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:59.191084   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:51:59.687057   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:51:59.687079   36778 round_trippers.go:469] Request Headers:
	I1209 22:51:59.687087   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:51:59.687090   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:51:59.762365   36778 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I1209 22:51:59.763672   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:00.187782   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:00.187809   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:00.187824   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:00.187830   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:00.190992   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:00.687396   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:00.687424   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:00.687436   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:00.687443   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:00.690509   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:01.187706   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:01.187726   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:01.187735   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:01.187738   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:01.191284   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:01.687807   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:01.687830   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:01.687838   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:01.687841   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:01.692246   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:02.187139   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:02.187164   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:02.187172   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:02.187176   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:02.191262   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:02.191900   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:02.687239   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:02.687260   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:02.687268   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:02.687272   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:02.690588   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:03.186879   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:03.186901   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:03.186909   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:03.186913   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:03.190077   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:03.686945   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:03.686970   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:03.686976   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:03.686980   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:03.690246   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:04.187422   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:04.187453   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:04.187461   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:04.187475   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:04.190833   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:04.686862   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:04.686888   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:04.686895   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:04.686899   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:04.690474   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:04.691179   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:05.187647   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:05.187672   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:05.187680   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:05.187686   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:05.191042   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:05.687592   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:05.687619   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:05.687631   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:05.687638   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:05.695966   36778 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1209 22:52:06.187585   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:06.187617   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:06.187624   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:06.187627   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:06.190871   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:06.687343   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:06.687365   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:06.687372   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:06.687376   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:06.691065   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:06.691740   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:07.186885   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:07.186908   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:07.186916   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:07.186920   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:07.190452   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:07.687481   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:07.687506   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:07.687517   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:07.687522   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:07.690781   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:08.187842   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:08.187865   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:08.187873   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:08.187877   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:08.190745   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:52:08.687010   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:08.687039   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:08.687047   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:08.687050   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:08.690129   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:09.187057   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:09.187082   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:09.187100   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:09.187105   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:09.190445   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:09.191229   36778 node_ready.go:53] node "ha-920193-m03" has status "Ready":"False"
	I1209 22:52:09.687849   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:09.687877   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:09.687887   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:09.687896   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:09.691161   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:10.187009   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:10.187030   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:10.187038   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:10.187041   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:10.190809   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:10.687323   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:10.687345   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:10.687353   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:10.687356   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:10.690476   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.187726   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:11.187753   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.187765   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.187771   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.190528   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:52:11.191296   36778 node_ready.go:49] node "ha-920193-m03" has status "Ready":"True"
	I1209 22:52:11.191322   36778 node_ready.go:38] duration metric: took 18.004635224s for node "ha-920193-m03" to be "Ready" ...
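The loop above polls GET /api/v1/nodes/ha-920193-m03 roughly every 500ms until the node's Ready condition turns True, which took about 18 seconds here. The same wait can be expressed directly with kubectl:

    # Block until the node reports Ready (equivalent to the polling loop in the log).
    kubectl wait --for=condition=Ready node/ha-920193-m03 --timeout=6m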
	I1209 22:52:11.191347   36778 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:52:11.191433   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:11.191446   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.191457   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.191463   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.197370   36778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 22:52:11.208757   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.208877   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9792g
	I1209 22:52:11.208889   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.208900   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.208908   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.213394   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.214171   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.214187   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.214197   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.214204   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.217611   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.218273   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.218301   36778 pod_ready.go:82] duration metric: took 9.507458ms for pod "coredns-7c65d6cfc9-9792g" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.218314   36778 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.218394   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pftgv
	I1209 22:52:11.218405   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.218415   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.218420   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.221934   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.223013   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.223030   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.223037   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.223041   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.226045   36778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 22:52:11.226613   36778 pod_ready.go:93] pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.226633   36778 pod_ready.go:82] duration metric: took 8.310101ms for pod "coredns-7c65d6cfc9-pftgv" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.226645   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.226713   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193
	I1209 22:52:11.226722   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.226729   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.226736   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.232210   36778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 22:52:11.233134   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.233148   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.233156   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.233159   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.236922   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.237775   36778 pod_ready.go:93] pod "etcd-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.237796   36778 pod_ready.go:82] duration metric: took 11.143234ms for pod "etcd-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.237806   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.237867   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m02
	I1209 22:52:11.237875   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.237882   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.237887   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.242036   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.242839   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:11.242858   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.242869   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.242877   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.246444   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.247204   36778 pod_ready.go:93] pod "etcd-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.247221   36778 pod_ready.go:82] duration metric: took 9.409944ms for pod "etcd-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.247231   36778 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.388592   36778 request.go:632] Waited for 141.281694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m03
	I1209 22:52:11.388678   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-920193-m03
	I1209 22:52:11.388690   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.388704   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.388713   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.392012   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.587869   36778 request.go:632] Waited for 195.273739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:11.587951   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:11.587957   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.587964   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.587968   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.591423   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:11.592154   36778 pod_ready.go:93] pod "etcd-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.592174   36778 pod_ready.go:82] duration metric: took 344.933564ms for pod "etcd-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
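Each system-critical pod is checked the same way: GET the pod, then GET the node it runs on, and accept once both report Ready. The "Waited ... due to client-side throttling" lines come from the client-go rate limiter spacing out this burst of requests, not from server-side priority and fairness. A whole group of these checks collapses to a single kubectl wait per component, for example:

    # Wait for every etcd member pod in the HA cluster to be Ready.
    kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m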
	I1209 22:52:11.592194   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.788563   36778 request.go:632] Waited for 196.298723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:52:11.788656   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193
	I1209 22:52:11.788669   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.788679   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.788687   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.792940   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.988037   36778 request.go:632] Waited for 194.354692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.988107   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:11.988113   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:11.988121   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:11.988125   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:11.992370   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:11.992995   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:11.993012   36778 pod_ready.go:82] duration metric: took 400.807496ms for pod "kube-apiserver-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:11.993021   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.188095   36778 request.go:632] Waited for 195.006713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:52:12.188167   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m02
	I1209 22:52:12.188172   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.188180   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.188185   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.191780   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.388747   36778 request.go:632] Waited for 196.170639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:12.388823   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:12.388829   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.388856   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.388869   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.392301   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.392894   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:12.392921   36778 pod_ready.go:82] duration metric: took 399.892746ms for pod "kube-apiserver-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.392938   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.587836   36778 request.go:632] Waited for 194.810311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m03
	I1209 22:52:12.587925   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-920193-m03
	I1209 22:52:12.587934   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.587948   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.587958   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.591021   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.787947   36778 request.go:632] Waited for 196.297135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:12.788010   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:12.788016   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.788024   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.788032   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.791450   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:12.792173   36778 pod_ready.go:93] pod "kube-apiserver-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:12.792194   36778 pod_ready.go:82] duration metric: took 399.248841ms for pod "kube-apiserver-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.792210   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:12.988330   36778 request.go:632] Waited for 196.053217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:52:12.988409   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193
	I1209 22:52:12.988415   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:12.988423   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:12.988428   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:12.992155   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.188272   36778 request.go:632] Waited for 195.156662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:13.188340   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:13.188346   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.188354   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.188362   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.192008   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.192630   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:13.192650   36778 pod_ready.go:82] duration metric: took 400.432601ms for pod "kube-controller-manager-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.192661   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.388559   36778 request.go:632] Waited for 195.821537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:52:13.388616   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m02
	I1209 22:52:13.388621   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.388629   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.388634   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.391883   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.587935   36778 request.go:632] Waited for 195.28191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:13.587989   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:13.587994   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.588007   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.588010   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.591630   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.592151   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:13.592169   36778 pod_ready.go:82] duration metric: took 399.499137ms for pod "kube-controller-manager-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.592180   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.788332   36778 request.go:632] Waited for 196.084844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m03
	I1209 22:52:13.788412   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-920193-m03
	I1209 22:52:13.788419   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.788429   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.788435   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.792121   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.988484   36778 request.go:632] Waited for 195.461528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:13.988555   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:13.988567   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:13.988579   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:13.988589   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:13.992243   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:13.992809   36778 pod_ready.go:93] pod "kube-controller-manager-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:13.992827   36778 pod_ready.go:82] duration metric: took 400.64066ms for pod "kube-controller-manager-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:13.992842   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.187961   36778 request.go:632] Waited for 195.049639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:52:14.188050   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lntbt
	I1209 22:52:14.188058   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.188071   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.188080   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.191692   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.388730   36778 request.go:632] Waited for 196.239352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:14.388788   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:14.388802   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.388813   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.388817   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.392311   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.392971   36778 pod_ready.go:93] pod "kube-proxy-lntbt" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:14.392992   36778 pod_ready.go:82] duration metric: took 400.138793ms for pod "kube-proxy-lntbt" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.393007   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pr7zk" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.588013   36778 request.go:632] Waited for 194.93384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pr7zk
	I1209 22:52:14.588077   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pr7zk
	I1209 22:52:14.588082   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.588095   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.588102   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.591447   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.788698   36778 request.go:632] Waited for 196.390033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:14.788766   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:14.788775   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.788787   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.788800   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.792338   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:14.793156   36778 pod_ready.go:93] pod "kube-proxy-pr7zk" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:14.793181   36778 pod_ready.go:82] duration metric: took 400.165156ms for pod "kube-proxy-pr7zk" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.793195   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:14.988348   36778 request.go:632] Waited for 195.014123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:52:14.988427   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r8nhm
	I1209 22:52:14.988434   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:14.988444   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:14.988457   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:14.993239   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:15.188292   36778 request.go:632] Waited for 194.264701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.188390   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.188403   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.188418   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.188429   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.192041   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.192565   36778 pod_ready.go:93] pod "kube-proxy-r8nhm" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:15.192584   36778 pod_ready.go:82] duration metric: took 399.381952ms for pod "kube-proxy-r8nhm" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.192595   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.388147   36778 request.go:632] Waited for 195.488765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:52:15.388224   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193
	I1209 22:52:15.388233   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.388240   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.388248   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.391603   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.588758   36778 request.go:632] Waited for 196.3144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.588837   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193
	I1209 22:52:15.588843   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.588850   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.588860   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.592681   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.593301   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:15.593327   36778 pod_ready.go:82] duration metric: took 400.724982ms for pod "kube-scheduler-ha-920193" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.593343   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.788627   36778 request.go:632] Waited for 195.204455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:52:15.788686   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m02
	I1209 22:52:15.788691   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.788699   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.788704   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.792349   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.988329   36778 request.go:632] Waited for 195.36216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:15.988396   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m02
	I1209 22:52:15.988402   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:15.988408   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:15.988412   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:15.991578   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:15.992400   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:15.992418   36778 pod_ready.go:82] duration metric: took 399.067203ms for pod "kube-scheduler-ha-920193-m02" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:15.992428   36778 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:16.188427   36778 request.go:632] Waited for 195.939633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m03
	I1209 22:52:16.188480   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-920193-m03
	I1209 22:52:16.188489   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.188496   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.188501   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.192012   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:16.388006   36778 request.go:632] Waited for 195.368293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:16.388057   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-920193-m03
	I1209 22:52:16.388062   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.388069   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.388073   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.392950   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:16.393391   36778 pod_ready.go:93] pod "kube-scheduler-ha-920193-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 22:52:16.393409   36778 pod_ready.go:82] duration metric: took 400.975145ms for pod "kube-scheduler-ha-920193-m03" in "kube-system" namespace to be "Ready" ...
	I1209 22:52:16.393420   36778 pod_ready.go:39] duration metric: took 5.202056835s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 22:52:16.393435   36778 api_server.go:52] waiting for apiserver process to appear ...
	I1209 22:52:16.393482   36778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 22:52:16.409725   36778 api_server.go:72] duration metric: took 23.485873684s to wait for apiserver process to appear ...
	I1209 22:52:16.409759   36778 api_server.go:88] waiting for apiserver healthz status ...
	I1209 22:52:16.409786   36778 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1209 22:52:16.414224   36778 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1209 22:52:16.414307   36778 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1209 22:52:16.414316   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.414324   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.414330   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.415229   36778 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1209 22:52:16.415280   36778 api_server.go:141] control plane version: v1.31.2
	I1209 22:52:16.415291   36778 api_server.go:131] duration metric: took 5.527187ms to wait for apiserver health ...
	I1209 22:52:16.415298   36778 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 22:52:16.588740   36778 request.go:632] Waited for 173.378808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.588806   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.588811   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.588818   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.588822   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.595459   36778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 22:52:16.602952   36778 system_pods.go:59] 24 kube-system pods found
	I1209 22:52:16.602979   36778 system_pods.go:61] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:52:16.602985   36778 system_pods.go:61] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:52:16.602989   36778 system_pods.go:61] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:52:16.602993   36778 system_pods.go:61] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:52:16.602996   36778 system_pods.go:61] "etcd-ha-920193-m03" [f9f5fa3d-3d06-47a8-b3f2-ed544b1cfb74] Running
	I1209 22:52:16.603001   36778 system_pods.go:61] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:52:16.603004   36778 system_pods.go:61] "kindnet-drj9q" [662d116b-e7ec-437c-9eeb-207cee5beecd] Running
	I1209 22:52:16.603007   36778 system_pods.go:61] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:52:16.603010   36778 system_pods.go:61] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:52:16.603015   36778 system_pods.go:61] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:52:16.603018   36778 system_pods.go:61] "kube-apiserver-ha-920193-m03" [1a55c551-c7bf-4b9f-ac49-e750cec2a92f] Running
	I1209 22:52:16.603022   36778 system_pods.go:61] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:52:16.603026   36778 system_pods.go:61] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:52:16.603031   36778 system_pods.go:61] "kube-controller-manager-ha-920193-m03" [d5278ea9-8878-4c18-bb6a-abb82a5bde12] Running
	I1209 22:52:16.603035   36778 system_pods.go:61] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:52:16.603038   36778 system_pods.go:61] "kube-proxy-pr7zk" [8236ddcf-f836-449c-8401-f799dd1a94f8] Running
	I1209 22:52:16.603041   36778 system_pods.go:61] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:52:16.603044   36778 system_pods.go:61] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:52:16.603047   36778 system_pods.go:61] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:52:16.603050   36778 system_pods.go:61] "kube-scheduler-ha-920193-m03" [27607c05-10b3-4d82-ba5e-2f3e7a44d2ad] Running
	I1209 22:52:16.603054   36778 system_pods.go:61] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:52:16.603057   36778 system_pods.go:61] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:52:16.603060   36778 system_pods.go:61] "kube-vip-ha-920193-m03" [7a747c56-20d6-41bb-b7c1-1db054ee699c] Running
	I1209 22:52:16.603062   36778 system_pods.go:61] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:52:16.603068   36778 system_pods.go:74] duration metric: took 187.765008ms to wait for pod list to return data ...
	I1209 22:52:16.603077   36778 default_sa.go:34] waiting for default service account to be created ...
	I1209 22:52:16.788510   36778 request.go:632] Waited for 185.359314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:52:16.788566   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1209 22:52:16.788571   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.788579   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.788586   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.791991   36778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 22:52:16.792139   36778 default_sa.go:45] found service account: "default"
	I1209 22:52:16.792154   36778 default_sa.go:55] duration metric: took 189.072143ms for default service account to be created ...
	I1209 22:52:16.792164   36778 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 22:52:16.988637   36778 request.go:632] Waited for 196.396881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.988723   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1209 22:52:16.988732   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:16.988740   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:16.988743   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:16.995659   36778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 22:52:17.002627   36778 system_pods.go:86] 24 kube-system pods found
	I1209 22:52:17.002660   36778 system_pods.go:89] "coredns-7c65d6cfc9-9792g" [61c6326d-4d13-49b7-9fae-c4f8906c3c7e] Running
	I1209 22:52:17.002667   36778 system_pods.go:89] "coredns-7c65d6cfc9-pftgv" [eb45cffe-b666-435b-ada3-605735ee43c1] Running
	I1209 22:52:17.002672   36778 system_pods.go:89] "etcd-ha-920193" [70c28c05-bf86-4a56-9863-0e836de39230] Running
	I1209 22:52:17.002676   36778 system_pods.go:89] "etcd-ha-920193-m02" [b1746065-266b-487c-8aa5-629d5b421a3b] Running
	I1209 22:52:17.002679   36778 system_pods.go:89] "etcd-ha-920193-m03" [f9f5fa3d-3d06-47a8-b3f2-ed544b1cfb74] Running
	I1209 22:52:17.002683   36778 system_pods.go:89] "kindnet-7bbbc" [76f14f4c-3759-45b3-8161-78d85c44b5a6] Running
	I1209 22:52:17.002686   36778 system_pods.go:89] "kindnet-drj9q" [662d116b-e7ec-437c-9eeb-207cee5beecd] Running
	I1209 22:52:17.002690   36778 system_pods.go:89] "kindnet-rcctv" [e9580e78-cfcb-45b6-9e90-f37ce3881c6b] Running
	I1209 22:52:17.002693   36778 system_pods.go:89] "kube-apiserver-ha-920193" [9c15e33e-f7c1-4b0b-84de-718043e0ea1b] Running
	I1209 22:52:17.002697   36778 system_pods.go:89] "kube-apiserver-ha-920193-m02" [d05fff94-7b3f-4127-8164-3c1a838bb71c] Running
	I1209 22:52:17.002700   36778 system_pods.go:89] "kube-apiserver-ha-920193-m03" [1a55c551-c7bf-4b9f-ac49-e750cec2a92f] Running
	I1209 22:52:17.002703   36778 system_pods.go:89] "kube-controller-manager-ha-920193" [4a411c41-cba8-4cb3-beec-fcd3f9a8ade1] Running
	I1209 22:52:17.002707   36778 system_pods.go:89] "kube-controller-manager-ha-920193-m02" [57181858-4c99-4afa-b94c-4ba320fc293d] Running
	I1209 22:52:17.002710   36778 system_pods.go:89] "kube-controller-manager-ha-920193-m03" [d5278ea9-8878-4c18-bb6a-abb82a5bde12] Running
	I1209 22:52:17.002717   36778 system_pods.go:89] "kube-proxy-lntbt" [8eb5538d-7ac0-4949-b7f9-29d5530f4521] Running
	I1209 22:52:17.002720   36778 system_pods.go:89] "kube-proxy-pr7zk" [8236ddcf-f836-449c-8401-f799dd1a94f8] Running
	I1209 22:52:17.002723   36778 system_pods.go:89] "kube-proxy-r8nhm" [355570de-1c30-4aa2-a56c-a06639cc339c] Running
	I1209 22:52:17.002726   36778 system_pods.go:89] "kube-scheduler-ha-920193" [60b5be1e-f9fe-4c30-b435-7d321c484bc4] Running
	I1209 22:52:17.002730   36778 system_pods.go:89] "kube-scheduler-ha-920193-m02" [a35e79b6-6cf2-4669-95fb-c1f358c811d2] Running
	I1209 22:52:17.002734   36778 system_pods.go:89] "kube-scheduler-ha-920193-m03" [27607c05-10b3-4d82-ba5e-2f3e7a44d2ad] Running
	I1209 22:52:17.002738   36778 system_pods.go:89] "kube-vip-ha-920193" [4f238d3a-6219-4e4d-89a8-22f73def8440] Running
	I1209 22:52:17.002740   36778 system_pods.go:89] "kube-vip-ha-920193-m02" [0fe8360d-b418-47dd-bcce-a8d5a6c4c29d] Running
	I1209 22:52:17.002744   36778 system_pods.go:89] "kube-vip-ha-920193-m03" [7a747c56-20d6-41bb-b7c1-1db054ee699c] Running
	I1209 22:52:17.002747   36778 system_pods.go:89] "storage-provisioner" [cb31984d-7602-406c-8c77-ce8571cdaa52] Running
	I1209 22:52:17.002753   36778 system_pods.go:126] duration metric: took 210.583954ms to wait for k8s-apps to be running ...
	I1209 22:52:17.002760   36778 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 22:52:17.002802   36778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 22:52:17.018265   36778 system_svc.go:56] duration metric: took 15.492212ms WaitForService to wait for kubelet
	I1209 22:52:17.018301   36778 kubeadm.go:582] duration metric: took 24.09445385s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:52:17.018323   36778 node_conditions.go:102] verifying NodePressure condition ...
	I1209 22:52:17.188743   36778 request.go:632] Waited for 170.323133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1209 22:52:17.188800   36778 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1209 22:52:17.188807   36778 round_trippers.go:469] Request Headers:
	I1209 22:52:17.188816   36778 round_trippers.go:473]     Accept: application/json, */*
	I1209 22:52:17.188823   36778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 22:52:17.193008   36778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 22:52:17.194620   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:52:17.194642   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:52:17.194653   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:52:17.194657   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:52:17.194661   36778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 22:52:17.194664   36778 node_conditions.go:123] node cpu capacity is 2
	I1209 22:52:17.194668   36778 node_conditions.go:105] duration metric: took 176.339707ms to run NodePressure ...
	I1209 22:52:17.194678   36778 start.go:241] waiting for startup goroutines ...
	I1209 22:52:17.194700   36778 start.go:255] writing updated cluster config ...
	I1209 22:52:17.194994   36778 ssh_runner.go:195] Run: rm -f paused
	I1209 22:52:17.247192   36778 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 22:52:17.250117   36778 out.go:177] * Done! kubectl is now configured to use "ha-920193" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.070481983Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784975070458896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a9b897f-4d72-4976-9ef6-8ab90ab974a0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.070968719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17bfe001-6f69-45a4-b105-31ead08c5526 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.071040403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17bfe001-6f69-45a4-b105-31ead08c5526 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.071272231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17bfe001-6f69-45a4-b105-31ead08c5526 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.106436794Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=010421ef-0061-4a98-b3ab-0cace1e7f71f name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.106525597Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=010421ef-0061-4a98-b3ab-0cace1e7f71f name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.107645193Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9908f640-48f4-459c-88ca-3603457a68fd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.108209713Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784975108187410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9908f640-48f4-459c-88ca-3603457a68fd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.108638265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23c95c92-f63e-464b-85cc-d69868f340eb name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.108748455Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23c95c92-f63e-464b-85cc-d69868f340eb name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.108967533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23c95c92-f63e-464b-85cc-d69868f340eb name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.144156375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8016383-7b11-4891-a917-d9bc3af21547 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.144348325Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8016383-7b11-4891-a917-d9bc3af21547 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.145364834Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4403f7e5-3658-42ef-bf3a-0997507c8a2a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.145841462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784975145817262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4403f7e5-3658-42ef-bf3a-0997507c8a2a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.146403158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=646eba35-653b-4c32-81f8-7c4498b90f23 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.146467920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=646eba35-653b-4c32-81f8-7c4498b90f23 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.146757520Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=646eba35-653b-4c32-81f8-7c4498b90f23 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.182627465Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a1351f7-c990-46b0-9c1d-756af2aa4703 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.182748743Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a1351f7-c990-46b0-9c1d-756af2aa4703 name=/runtime.v1.RuntimeService/Version
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.184114331Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2d6695e-5ea1-4d23-996d-704da317ec80 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.184535213Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784975184514240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2d6695e-5ea1-4d23-996d-704da317ec80 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.185128353Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37aca381-ec68-47e9-afe0-84be321b94ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.185198161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37aca381-ec68-47e9-afe0-84be321b94ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 22:56:15 ha-920193 crio[663]: time="2024-12-09 22:56:15.185421190Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2098445c3438f8b0d5a49c50ab807d4692815a97d8732b20b47c76ff9381a76,PodSandboxId:32c399f593c298fd59b6593cd718d6d5d6899cacbe2ca9a451eae0cce6586d32,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733784740856215420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-4dbs2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 713f8602-6e31-4d2c-b66e-ddcc856f3d96,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c,PodSandboxId:28a5e497d421cd446ab1fa053ce2a1d367f385d8d7480426d8bdb7269cb0ac01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606136476368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9792g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61c6326d-4d13-49b7-9fae-c4f8906c3c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a,PodSandboxId:8986bab4f9538153cc0450face96ec55ad84843bf30b83618639ea1d89ad002e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733784606113487368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pftgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
eb45cffe-b666-435b-ada3-605735ee43c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75,PodSandboxId:24f95152f109449addd385310712e35defdffe2d831c55b614cb2f59527ce4e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733784605136123919,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb31984d-7602-406c-8c77-ce8571cdaa52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a,PodSandboxId:91e324c9c3171a25cf06362a62d77103a523e6d0da9b6a82fa14b606f4828c5c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733784593211202860,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rcctv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9580e78-cfcb-45b6-9e90-f37ce3881c6b,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678,PodSandboxId:7d30b07a36a6c7c6f07f44ce91d2e958657f533c11a16e501836ebf4eab9c339,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733784589
863424703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8nhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 355570de-1c30-4aa2-a56c-a06639cc339c,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f,PodSandboxId:dcec6011252c41298b12a101fb51c8b91c1b38ea9bc0151b6980f69527186ed1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378458116
6459615,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d35c5d117675d3f6b1e3496412ccf95,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581,PodSandboxId:a053c05339f972a11da8b5a3a39e9c14b9110864ea563f08bd07327fd6e6af10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733784578334605737,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47ddf9d5365fec6250c855056c8f531,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a,PodSandboxId:7dd45ba230f9006172f49368d56abbfa8ac1f3fd32cbd4ded30df3d9e90be698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733784578293475702,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5422d6506f773ffbf1f21f835466832,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9,PodSandboxId:5b9cd68863c1403e489bdf6102dc4a79d4207d1f0150b4a356a2e0ad3bfc1b7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733784578260543093,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 953f998529f26865f22e300e139c6849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963,PodSandboxId:ba6c2156966ab967451ebfb65361e3245e35dc076d6271f2db162ddc9498d085,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733784578211411837,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-920193,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f1665c4aeae8e7befddb7a386efc8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37aca381-ec68-47e9-afe0-84be321b94ab name=/runtime.v1.RuntimeService/ListContainers
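	For reference (this note and the sketch below are not part of the captured output): the Version, ImageFsInfo and ListContainers request/response pairs above are the kubelet's routine CRI polling of cri-o over unix:///var/run/crio/crio.sock, the socket named in the node annotations further down. A minimal Go sketch that issues the same unfiltered ListContainers call with the standard k8s.io/cri-api client might look like the following; the socket path is taken from the log, everything else is an assumption rather than minikube's own code.

	    // listcontainers.go: hedged sketch of the CRI ListContainers call seen in the crio debug log.
	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "time"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        // Socket path taken from the cri-socket annotation in the describe output below.
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            log.Fatalf("dial cri-o: %v", err)
	        }
	        defer conn.Close()

	        client := runtimeapi.NewRuntimeServiceClient(conn)
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        // Same call as the /runtime.v1.RuntimeService/ListContainers entries above: no filter, full list.
	        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	        if err != nil {
	            log.Fatalf("ListContainers: %v", err)
	        }
	        for _, c := range resp.Containers {
	            fmt.Printf("%s\t%s\t%v\n", c.Id[:13], c.Metadata.Name, c.State)
	        }
	    }

	Run on the node, this would be expected to print the same eleven running containers that appear in the container status table below.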
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b2098445c3438       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   32c399f593c29       busybox-7dff88458-4dbs2
	14b80feac0f9a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   28a5e497d421c       coredns-7c65d6cfc9-9792g
	6bdcee2ff30bb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   8986bab4f9538       coredns-7c65d6cfc9-pftgv
	a6a62ed3f6ca8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   24f95152f1094       storage-provisioner
	d26f562ad5527       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   91e324c9c3171       kindnet-rcctv
	233aa49869db4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   7d30b07a36a6c       kube-proxy-r8nhm
	b845a7a938050       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   dcec6011252c4       kube-vip-ha-920193
	2c5a043b38715       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   a053c05339f97       kube-apiserver-ha-920193
	f0a29f1dc44e4       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   7dd45ba230f90       kube-controller-manager-ha-920193
	b8197a166eeaf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   5b9cd68863c14       etcd-ha-920193
	6ee0fecee78f0       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   ba6c2156966ab       kube-scheduler-ha-920193
	
	
	==> coredns [14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c] <==
	[INFO] 10.244.2.2:60285 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00013048s
	[INFO] 10.244.0.4:42105 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201273s
	[INFO] 10.244.0.4:33722 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003973627s
	[INFO] 10.244.0.4:50780 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003385872s
	[INFO] 10.244.0.4:46762 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000330906s
	[INFO] 10.244.0.4:41821 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099413s
	[INFO] 10.244.1.2:38814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240081s
	[INFO] 10.244.1.2:51472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001124121s
	[INFO] 10.244.1.2:49496 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094508s
	[INFO] 10.244.2.2:44597 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168981s
	[INFO] 10.244.2.2:56334 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001450617s
	[INFO] 10.244.2.2:52317 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077228s
	[INFO] 10.244.0.4:57299 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133066s
	[INFO] 10.244.0.4:56277 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119106s
	[INFO] 10.244.0.4:45466 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040838s
	[INFO] 10.244.1.2:44460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200839s
	[INFO] 10.244.2.2:38498 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135133s
	[INFO] 10.244.2.2:50433 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00021653s
	[INFO] 10.244.2.2:49338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098224s
	[INFO] 10.244.0.4:33757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178322s
	[INFO] 10.244.0.4:48357 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000197259s
	[INFO] 10.244.0.4:36014 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000126459s
	[INFO] 10.244.1.2:50940 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000306385s
	[INFO] 10.244.2.2:39693 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191708s
	[INFO] 10.244.2.2:43130 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156713s
	
	
	==> coredns [6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a] <==
	[INFO] 10.244.2.2:53803 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001802154s
	[INFO] 10.244.0.4:53804 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136883s
	[INFO] 10.244.0.4:33536 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133128s
	[INFO] 10.244.0.4:40697 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109987s
	[INFO] 10.244.1.2:60686 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001746087s
	[INFO] 10.244.1.2:57981 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000176425s
	[INFO] 10.244.1.2:42922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001279s
	[INFO] 10.244.1.2:49248 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000199359s
	[INFO] 10.244.1.2:56349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176613s
	[INFO] 10.244.2.2:37288 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194316s
	[INFO] 10.244.2.2:36807 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001853178s
	[INFO] 10.244.2.2:47892 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097133s
	[INFO] 10.244.2.2:50492 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000249713s
	[INFO] 10.244.2.2:42642 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102673s
	[INFO] 10.244.0.4:45744 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170409s
	[INFO] 10.244.1.2:36488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227015s
	[INFO] 10.244.1.2:37416 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118932s
	[INFO] 10.244.1.2:48536 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176061s
	[INFO] 10.244.2.2:47072 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110597s
	[INFO] 10.244.0.4:58052 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000268133s
	[INFO] 10.244.1.2:38540 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000277422s
	[INFO] 10.244.1.2:55804 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000232786s
	[INFO] 10.244.1.2:35281 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000214405s
	[INFO] 10.244.2.2:37415 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174588s
	[INFO] 10.244.2.2:32790 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097554s
	
	
	==> describe nodes <==
	Name:               ha-920193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T22_49_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:49:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:56:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:49:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:49:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:49:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:52:47 +0000   Mon, 09 Dec 2024 22:50:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-920193
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9825096d628741caa811f99c10cc6460
	  System UUID:                9825096d-6287-41ca-a811-f99c10cc6460
	  Boot ID:                    7af2b544-54c4-4e33-8dc8-e2313bb29389
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4dbs2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 coredns-7c65d6cfc9-9792g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 coredns-7c65d6cfc9-pftgv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 etcd-ha-920193                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m31s
	  kube-system                 kindnet-rcctv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m27s
	  kube-system                 kube-apiserver-ha-920193             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-controller-manager-ha-920193    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-proxy-r8nhm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-scheduler-ha-920193             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-vip-ha-920193                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m25s  kube-proxy       
	  Normal  Starting                 6m31s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m31s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m31s  kubelet          Node ha-920193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s  kubelet          Node ha-920193 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s  kubelet          Node ha-920193 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m27s  node-controller  Node ha-920193 event: Registered Node ha-920193 in Controller
	  Normal  NodeReady                6m11s  kubelet          Node ha-920193 status is now: NodeReady
	  Normal  RegisteredNode           5m32s  node-controller  Node ha-920193 event: Registered Node ha-920193 in Controller
	  Normal  RegisteredNode           4m18s  node-controller  Node ha-920193 event: Registered Node ha-920193 in Controller
	
	
	Name:               ha-920193-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T22_50_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:50:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:53:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 09 Dec 2024 22:52:38 +0000   Mon, 09 Dec 2024 22:54:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    ha-920193-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 418684ffa8244b8180cf28f3a347b4c2
	  System UUID:                418684ff-a824-4b81-80cf-28f3a347b4c2
	  Boot ID:                    15131626-aa5d-4727-aedd-7039ff10fa6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rkqdv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 etcd-ha-920193-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m37s
	  kube-system                 kindnet-7bbbc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m39s
	  kube-system                 kube-apiserver-ha-920193-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-controller-manager-ha-920193-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-proxy-lntbt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-scheduler-ha-920193-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-vip-ha-920193-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m36s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m39s (x8 over 5m40s)  kubelet          Node ha-920193-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m39s (x8 over 5m40s)  kubelet          Node ha-920193-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m39s (x7 over 5m40s)  kubelet          Node ha-920193-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-920193-m02 event: Registered Node ha-920193-m02 in Controller
	  Normal  RegisteredNode           5m32s                  node-controller  Node ha-920193-m02 event: Registered Node ha-920193-m02 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-920193-m02 event: Registered Node ha-920193-m02 in Controller
	  Normal  NodeNotReady             2m3s                   node-controller  Node ha-920193-m02 status is now: NodeNotReady
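	For reference (again not captured output): the ha-920193-m02 block above shows the state this test exercises after the secondary control-plane node is stopped, with unreachable taints and all conditions Unknown once the kubelet stopped posting status. A short client-go sketch that prints each node's Ready condition, assuming KUBECONFIG points at this profile's kubeconfig, might look like:

	    // nodeready.go: hedged sketch, not part of the report; summarizes node readiness
	    // the same way the describe output in this log does.
	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "os"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // KUBECONFIG path is an assumption; point it at the cluster's kubeconfig.
	        config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	        if err != nil {
	            log.Fatal(err)
	        }
	        clientset, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            log.Fatal(err)
	        }
	        nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	        if err != nil {
	            log.Fatal(err)
	        }
	        for _, n := range nodes.Items {
	            for _, c := range n.Status.Conditions {
	                if c.Type == corev1.NodeReady {
	                    fmt.Printf("%s\tReady=%s\t%s\n", n.Name, c.Status, c.Reason)
	                }
	            }
	        }
	    }

	Against this cluster it would be expected to report Ready=True for ha-920193, ha-920193-m03 and ha-920193-m04, and Ready=Unknown for ha-920193-m02, matching the describe output in this section.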
	
	
	Name:               ha-920193-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T22_51_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:51:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:56:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:51:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:51:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:51:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:52:50 +0000   Mon, 09 Dec 2024 22:52:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    ha-920193-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c09ac2bcafe5487187b79c07f4dd9720
	  System UUID:                c09ac2bc-afe5-4871-87b7-9c07f4dd9720
	  Boot ID:                    1fbc2da5-2f05-4c65-92cc-ea55dc184e77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zshqx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 etcd-ha-920193-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m24s
	  kube-system                 kindnet-drj9q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m26s
	  kube-system                 kube-apiserver-ha-920193-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-ha-920193-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-pr7zk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-ha-920193-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-vip-ha-920193-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m22s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m26s                  cidrAllocator    Node ha-920193-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet          Node ha-920193-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet          Node ha-920193-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m26s)  kubelet          Node ha-920193-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-920193-m03 event: Registered Node ha-920193-m03 in Controller
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-920193-m03 event: Registered Node ha-920193-m03 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-920193-m03 event: Registered Node ha-920193-m03 in Controller
	
	
	Name:               ha-920193-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-920193-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=ha-920193
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T22_52_56_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 22:52:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-920193-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 22:56:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:52:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:52:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:52:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 22:53:25 +0000   Mon, 09 Dec 2024 22:53:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    ha-920193-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a2dbc042e3045febd5c0c9d1b2c22ec
	  System UUID:                4a2dbc04-2e30-45fe-bd5c-0c9d1b2c22ec
	  Boot ID:                    1261e6c2-362c-4edd-9457-2b833cda280a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4pzwv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m20s
	  kube-system                 kube-proxy-7d45n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m14s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     3m20s                  cidrAllocator    Node ha-920193-m04 status is now: CIDRAssignmentFailed
	  Normal  Starting                 3m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m20s (x2 over 3m20s)  kubelet          Node ha-920193-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m20s (x2 over 3m20s)  kubelet          Node ha-920193-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m20s (x2 over 3m20s)  kubelet          Node ha-920193-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-920193-m04 event: Registered Node ha-920193-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-920193-m04 event: Registered Node ha-920193-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-920193-m04 event: Registered Node ha-920193-m04 in Controller
	  Normal  NodeReady                2m59s                  kubelet          Node ha-920193-m04 status is now: NodeReady
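
	For reference, the percentages in the "Allocated resources" tables above are computed against each node's allocatable capacity. Below is a minimal Go sketch of that arithmetic (not part of the captured logs), assuming the ha-920193-m04 values shown above: allocatable 2 CPUs and 2164184Ki memory, with requests of 100m CPU and 50Mi memory. Integer division reproduces the rounded-down figures kubectl prints (5% CPU, 2% memory).

	package main

	import "fmt"

	func main() {
		// Allocatable capacity reported for ha-920193-m04 in the node description above.
		const allocatableMilliCPU = 2000 // 2 CPUs
		const allocatableMemoryKi = 2164184

		// Non-terminated pod requests on ha-920193-m04 above: cpu 100m, memory 50Mi.
		requestMilliCPU := 100
		requestMemoryKi := 50 * 1024 // 50Mi expressed in Ki

		fmt.Printf("cpu:    %d%%\n", requestMilliCPU*100/allocatableMilliCPU)   // 5%
		fmt.Printf("memory: %d%%\n", requestMemoryKi*100/allocatableMemoryKi)   // 2%
	}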
	
	
	==> dmesg <==
	[Dec 9 22:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049320] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036387] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.848109] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.938823] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.563382] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.738770] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.057878] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055312] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.165760] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.148687] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.252407] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.807769] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.142269] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.067556] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.253709] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.082838] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.454038] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 9 22:50] kauditd_printk_skb: 40 callbacks suppressed
	[ +36.675272] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9] <==
	{"level":"warn","ts":"2024-12-09T22:56:15.432815Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.442959Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.449099Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.449357Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.454879Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.457858Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.461046Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.466246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.471584Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.476816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.482897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.486349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.492743Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.499306Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.505369Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.508425Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.511162Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.514582Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.519786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.525115Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.549440Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.573821Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.582550Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.584732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T22:56:15.586693Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"58c3027c89d4efb6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 22:56:15 up 7 min,  0 users,  load average: 0.36, 0.26, 0.12
	Linux ha-920193 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a] <==
	I1209 22:55:44.242279       1 main.go:324] Node ha-920193-m02 has CIDR [10.244.1.0/24] 
	I1209 22:55:54.237055       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1209 22:55:54.237098       1 main.go:301] handling current node
	I1209 22:55:54.237112       1 main.go:297] Handling node with IPs: map[192.168.39.43:{}]
	I1209 22:55:54.237117       1 main.go:324] Node ha-920193-m02 has CIDR [10.244.1.0/24] 
	I1209 22:55:54.237320       1 main.go:297] Handling node with IPs: map[192.168.39.45:{}]
	I1209 22:55:54.237342       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:55:54.237447       1 main.go:297] Handling node with IPs: map[192.168.39.98:{}]
	I1209 22:55:54.237463       1 main.go:324] Node ha-920193-m04 has CIDR [10.244.3.0/24] 
	I1209 22:56:04.236382       1 main.go:297] Handling node with IPs: map[192.168.39.45:{}]
	I1209 22:56:04.236482       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:56:04.236733       1 main.go:297] Handling node with IPs: map[192.168.39.98:{}]
	I1209 22:56:04.236768       1 main.go:324] Node ha-920193-m04 has CIDR [10.244.3.0/24] 
	I1209 22:56:04.236884       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1209 22:56:04.236908       1 main.go:301] handling current node
	I1209 22:56:04.236931       1 main.go:297] Handling node with IPs: map[192.168.39.43:{}]
	I1209 22:56:04.236947       1 main.go:324] Node ha-920193-m02 has CIDR [10.244.1.0/24] 
	I1209 22:56:14.244429       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1209 22:56:14.244471       1 main.go:301] handling current node
	I1209 22:56:14.244486       1 main.go:297] Handling node with IPs: map[192.168.39.43:{}]
	I1209 22:56:14.244492       1 main.go:324] Node ha-920193-m02 has CIDR [10.244.1.0/24] 
	I1209 22:56:14.244726       1 main.go:297] Handling node with IPs: map[192.168.39.45:{}]
	I1209 22:56:14.244750       1 main.go:324] Node ha-920193-m03 has CIDR [10.244.2.0/24] 
	I1209 22:56:14.244874       1 main.go:297] Handling node with IPs: map[192.168.39.98:{}]
	I1209 22:56:14.244891       1 main.go:324] Node ha-920193-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581] <==
	W1209 22:49:43.150982       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102]
	I1209 22:49:43.152002       1 controller.go:615] quota admission added evaluator for: endpoints
	I1209 22:49:43.156330       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 22:49:43.387632       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1209 22:49:44.564732       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1209 22:49:44.579130       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 22:49:44.588831       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1209 22:49:48.591895       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1209 22:49:48.841334       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1209 22:52:22.354256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36040: use of closed network connection
	E1209 22:52:22.536970       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36064: use of closed network connection
	E1209 22:52:22.712523       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36088: use of closed network connection
	E1209 22:52:22.898417       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36102: use of closed network connection
	E1209 22:52:23.071122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36126: use of closed network connection
	E1209 22:52:23.250546       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36138: use of closed network connection
	E1209 22:52:23.423505       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36152: use of closed network connection
	E1209 22:52:23.596493       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36174: use of closed network connection
	E1209 22:52:23.770267       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36200: use of closed network connection
	E1209 22:52:24.059362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36220: use of closed network connection
	E1209 22:52:24.222108       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36234: use of closed network connection
	E1209 22:52:24.394542       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36254: use of closed network connection
	E1209 22:52:24.570825       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36280: use of closed network connection
	E1209 22:52:24.742045       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36308: use of closed network connection
	E1209 22:52:24.918566       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36330: use of closed network connection
	W1209 22:53:53.164722       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.45]
	
	
	==> kube-controller-manager [f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a] <==
	I1209 22:52:55.696316       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	E1209 22:52:55.827513       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"d21ce5c2-c9ae-46d3-8e56-962d14b633c9\", ResourceVersion:\"913\", Generation:1, CreationTimestamp:time.Date(2024, time.December, 9, 22, 49, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\
",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20241108-5c6d2daf\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00247f6a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\
"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0026282e8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolume
ClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002628300), EmptyDir:(*v1.EmptyDirVolumeSource)
(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.Portworx
VolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002628318), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Az
ureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20241108-5c6d2daf\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00247f6c0)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarS
ource)(0xc00247f700)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:fals
e, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc00298a060), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralCont
ainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002895a00), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002509e80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), O
verhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0027a7a80)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002895a3c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1209 22:52:55.828552       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"6fe45e3d-72f3-4c58-8284-ee89d6d57a36\", ResourceVersion:\"871\", Generation:1, CreationTimestamp:time.Date(2024, time.December, 9, 22, 49, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00197c7a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\"
, Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)
(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00265ecc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002193ae8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolume
Source)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVol
umeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002193b00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtual
DiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.31.2\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00197c7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Reso
urceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"
/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0026ee600), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002860a98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0025a4880), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostA
lias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002693bd0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002860af0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled
on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1209 22:52:56.102815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:57.678400       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.159889       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.160065       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-920193-m04"
	I1209 22:52:58.180925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.828069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:52:58.908919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:05.805409       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:16.012967       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-920193-m04"
	I1209 22:53:16.013430       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:16.029012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:17.646042       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:53:25.994489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m04"
	I1209 22:54:12.667473       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-920193-m04"
	I1209 22:54:12.668375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	I1209 22:54:12.690072       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	I1209 22:54:12.722935       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.821273ms"
	I1209 22:54:12.724268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.814µs"
	I1209 22:54:13.270393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	I1209 22:54:17.915983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-920193-m02"
	
	
	==> kube-proxy [233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 22:49:50.258403       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 22:49:50.274620       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	E1209 22:49:50.274749       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 22:49:50.309286       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 22:49:50.309340       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 22:49:50.309367       1 server_linux.go:169] "Using iptables Proxier"
	I1209 22:49:50.311514       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 22:49:50.312044       1 server.go:483] "Version info" version="v1.31.2"
	I1209 22:49:50.312073       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 22:49:50.314372       1 config.go:199] "Starting service config controller"
	I1209 22:49:50.314401       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 22:49:50.314584       1 config.go:105] "Starting endpoint slice config controller"
	I1209 22:49:50.314607       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 22:49:50.315221       1 config.go:328] "Starting node config controller"
	I1209 22:49:50.315250       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 22:49:50.415190       1 shared_informer.go:320] Caches are synced for service config
	I1209 22:49:50.415151       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 22:49:50.415308       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963] <==
	W1209 22:49:42.622383       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 22:49:42.622920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 22:49:42.673980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 22:49:42.674373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 22:49:42.700294       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 22:49:42.700789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 22:49:44.393323       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1209 22:52:18.167059       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkqdv\": pod busybox-7dff88458-rkqdv is already assigned to node \"ha-920193-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rkqdv" node="ha-920193-m02"
	E1209 22:52:18.167170       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c1517f25-fc19-4255-b4c6-9a02511b80c3(default/busybox-7dff88458-rkqdv) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rkqdv"
	E1209 22:52:18.167196       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkqdv\": pod busybox-7dff88458-rkqdv is already assigned to node \"ha-920193-m02\"" pod="default/busybox-7dff88458-rkqdv"
	I1209 22:52:18.167215       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rkqdv" node="ha-920193-m02"
	E1209 22:52:55.621239       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x5mqb\": pod kindnet-x5mqb is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-x5mqb" node="ha-920193-m04"
	E1209 22:52:55.621341       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x5mqb\": pod kindnet-x5mqb is already assigned to node \"ha-920193-m04\"" pod="kube-system/kindnet-x5mqb"
	E1209 22:52:55.648021       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-k5v9w\": pod kube-proxy-k5v9w is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-k5v9w" node="ha-920193-m04"
	E1209 22:52:55.648095       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5882629a-a929-45e4-b026-e75a2c17d56d(kube-system/kube-proxy-k5v9w) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-k5v9w"
	E1209 22:52:55.648113       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-k5v9w\": pod kube-proxy-k5v9w is already assigned to node \"ha-920193-m04\"" pod="kube-system/kube-proxy-k5v9w"
	I1209 22:52:55.648138       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-k5v9w" node="ha-920193-m04"
	E1209 22:52:55.758943       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mp7q7\": pod kube-proxy-mp7q7 is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mp7q7" node="ha-920193-m04"
	E1209 22:52:55.759080       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a4d32bae-6ec6-4338-8689-3b32518b021b(kube-system/kube-proxy-mp7q7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-mp7q7"
	E1209 22:52:55.759142       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mp7q7\": pod kube-proxy-mp7q7 is already assigned to node \"ha-920193-m04\"" pod="kube-system/kube-proxy-mp7q7"
	I1209 22:52:55.759188       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-mp7q7" node="ha-920193-m04"
	E1209 22:52:55.775999       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7d45n\": pod kube-proxy-7d45n is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7d45n" node="ha-920193-m04"
	E1209 22:52:55.776095       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7d45n\": pod kube-proxy-7d45n is already assigned to node \"ha-920193-m04\"" pod="kube-system/kube-proxy-7d45n"
	E1209 22:52:55.784854       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4pzwv\": pod kindnet-4pzwv is already assigned to node \"ha-920193-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4pzwv" node="ha-920193-m04"
	E1209 22:52:55.785146       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4pzwv\": pod kindnet-4pzwv is already assigned to node \"ha-920193-m04\"" pod="kube-system/kindnet-4pzwv"
	
	
	==> kubelet <==
	Dec 09 22:54:44 ha-920193 kubelet[1302]: E1209 22:54:44.581397    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784884581065881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:54:44 ha-920193 kubelet[1302]: E1209 22:54:44.581439    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784884581065881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:54:54 ha-920193 kubelet[1302]: E1209 22:54:54.583096    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784894582573620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:54:54 ha-920193 kubelet[1302]: E1209 22:54:54.583476    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784894582573620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:04 ha-920193 kubelet[1302]: E1209 22:55:04.587043    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784904586404563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:04 ha-920193 kubelet[1302]: E1209 22:55:04.587520    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784904586404563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:14 ha-920193 kubelet[1302]: E1209 22:55:14.590203    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784914589554601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:14 ha-920193 kubelet[1302]: E1209 22:55:14.590522    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784914589554601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:24 ha-920193 kubelet[1302]: E1209 22:55:24.593898    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784924593226467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:24 ha-920193 kubelet[1302]: E1209 22:55:24.593942    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784924593226467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:34 ha-920193 kubelet[1302]: E1209 22:55:34.596079    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784934595337713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:34 ha-920193 kubelet[1302]: E1209 22:55:34.596564    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784934595337713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:44 ha-920193 kubelet[1302]: E1209 22:55:44.520346    1302 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 22:55:44 ha-920193 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 22:55:44 ha-920193 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 22:55:44 ha-920193 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 22:55:44 ha-920193 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 22:55:44 ha-920193 kubelet[1302]: E1209 22:55:44.598917    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784944598396332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:44 ha-920193 kubelet[1302]: E1209 22:55:44.598999    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784944598396332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:54 ha-920193 kubelet[1302]: E1209 22:55:54.601949    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784954601550343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:55:54 ha-920193 kubelet[1302]: E1209 22:55:54.602225    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784954601550343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:56:04 ha-920193 kubelet[1302]: E1209 22:56:04.604279    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784964603929270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:56:04 ha-920193 kubelet[1302]: E1209 22:56:04.604303    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784964603929270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:56:14 ha-920193 kubelet[1302]: E1209 22:56:14.606960    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784974606470006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 22:56:14 ha-920193 kubelet[1302]: E1209 22:56:14.607003    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733784974606470006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-920193 -n ha-920193
helpers_test.go:261: (dbg) Run:  kubectl --context ha-920193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.29s)
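Note on the scheduler errors at the top of the log dump above: "Operation cannot be fulfilled on pods/binding ... is already assigned to node" is a 409 Conflict from the API server's pod binding subresource. The scheduler binds optimistically, and when the bind races with an earlier assignment it treats the conflict as "already scheduled" and drops the pod from its queue (schedule_one.go:1070 above) instead of retrying. Below is a minimal client-go sketch of that same check, assuming a reachable cluster; the helper name bindPod and the namespace/pod/node values are placeholders, not the scheduler's actual code.

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// bindPod issues a Binding the same way the DefaultBinder plugin does. If the
// pod was already bound by someone else, the API server answers with a
// 409 Conflict ("Operation cannot be fulfilled on pods/binding ..."), which is
// treated here as "already assigned" rather than as a retryable failure.
func bindPod(ctx context.Context, cs kubernetes.Interface, ns, pod, node string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: pod},
		Target:     v1.ObjectReference{Kind: "Node", Name: node},
	}
	err := cs.CoreV1().Pods(ns).Bind(ctx, binding, metav1.CreateOptions{})
	if apierrors.IsConflict(err) {
		// Same situation as the scheduler log above: nothing left to do.
		fmt.Printf("pod %s/%s already bound, skipping\n", ns, pod)
		return nil
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Placeholder names; in the report the affected pods were kube-proxy and kindnet pods.
	if err := bindPod(context.Background(), cs, "kube-system", "example-pod", "example-node"); err != nil {
		panic(err)
	}
}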

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (354.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-920193 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-920193 -v=7 --alsologtostderr
E1209 22:58:12.522862   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-920193 -v=7 --alsologtostderr: exit status 82 (2m1.825669983s)

                                                
                                                
-- stdout --
	* Stopping node "ha-920193-m04"  ...
	* Stopping node "ha-920193-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 22:56:16.587431   42018 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:56:16.587551   42018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:56:16.587586   42018 out.go:358] Setting ErrFile to fd 2...
	I1209 22:56:16.587594   42018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:56:16.587782   42018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 22:56:16.588042   42018 out.go:352] Setting JSON to false
	I1209 22:56:16.588150   42018 mustload.go:65] Loading cluster: ha-920193
	I1209 22:56:16.588535   42018 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:56:16.588653   42018 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:56:16.588853   42018 mustload.go:65] Loading cluster: ha-920193
	I1209 22:56:16.589012   42018 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:56:16.589052   42018 stop.go:39] StopHost: ha-920193-m04
	I1209 22:56:16.589502   42018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:56:16.589564   42018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:56:16.604834   42018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35499
	I1209 22:56:16.605385   42018 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:56:16.605978   42018 main.go:141] libmachine: Using API Version  1
	I1209 22:56:16.606005   42018 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:56:16.606406   42018 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:56:16.609110   42018 out.go:177] * Stopping node "ha-920193-m04"  ...
	I1209 22:56:16.610257   42018 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 22:56:16.610292   42018 main.go:141] libmachine: (ha-920193-m04) Calling .DriverName
	I1209 22:56:16.610514   42018 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 22:56:16.610541   42018 main.go:141] libmachine: (ha-920193-m04) Calling .GetSSHHostname
	I1209 22:56:16.613786   42018 main.go:141] libmachine: (ha-920193-m04) DBG | domain ha-920193-m04 has defined MAC address 52:54:00:ac:81:67 in network mk-ha-920193
	I1209 22:56:16.614333   42018 main.go:141] libmachine: (ha-920193-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:81:67", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:52:40 +0000 UTC Type:0 Mac:52:54:00:ac:81:67 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-920193-m04 Clientid:01:52:54:00:ac:81:67}
	I1209 22:56:16.614381   42018 main.go:141] libmachine: (ha-920193-m04) DBG | domain ha-920193-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:ac:81:67 in network mk-ha-920193
	I1209 22:56:16.614559   42018 main.go:141] libmachine: (ha-920193-m04) Calling .GetSSHPort
	I1209 22:56:16.614743   42018 main.go:141] libmachine: (ha-920193-m04) Calling .GetSSHKeyPath
	I1209 22:56:16.614896   42018 main.go:141] libmachine: (ha-920193-m04) Calling .GetSSHUsername
	I1209 22:56:16.615040   42018 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m04/id_rsa Username:docker}
	I1209 22:56:16.703783   42018 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 22:56:16.756142   42018 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1209 22:56:16.808871   42018 main.go:141] libmachine: Stopping "ha-920193-m04"...
	I1209 22:56:16.808893   42018 main.go:141] libmachine: (ha-920193-m04) Calling .GetState
	I1209 22:56:16.810322   42018 main.go:141] libmachine: (ha-920193-m04) Calling .Stop
	I1209 22:56:16.813455   42018 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 0/120
	I1209 22:56:17.959998   42018 main.go:141] libmachine: (ha-920193-m04) Calling .GetState
	I1209 22:56:17.961360   42018 main.go:141] libmachine: Machine "ha-920193-m04" was stopped.
	I1209 22:56:17.961380   42018 stop.go:75] duration metric: took 1.351127373s to stop
	I1209 22:56:17.961404   42018 stop.go:39] StopHost: ha-920193-m03
	I1209 22:56:17.961860   42018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:56:17.961919   42018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:56:17.977164   42018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I1209 22:56:17.977589   42018 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:56:17.978092   42018 main.go:141] libmachine: Using API Version  1
	I1209 22:56:17.978111   42018 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:56:17.978456   42018 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:56:17.980253   42018 out.go:177] * Stopping node "ha-920193-m03"  ...
	I1209 22:56:17.981285   42018 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 22:56:17.981311   42018 main.go:141] libmachine: (ha-920193-m03) Calling .DriverName
	I1209 22:56:17.981544   42018 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 22:56:17.981576   42018 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHHostname
	I1209 22:56:17.984559   42018 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:56:17.985008   42018 main.go:141] libmachine: (ha-920193-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:0a:7f", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:51:15 +0000 UTC Type:0 Mac:52:54:00:50:0a:7f Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:ha-920193-m03 Clientid:01:52:54:00:50:0a:7f}
	I1209 22:56:17.985031   42018 main.go:141] libmachine: (ha-920193-m03) DBG | domain ha-920193-m03 has defined IP address 192.168.39.45 and MAC address 52:54:00:50:0a:7f in network mk-ha-920193
	I1209 22:56:17.985245   42018 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHPort
	I1209 22:56:17.985428   42018 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHKeyPath
	I1209 22:56:17.985592   42018 main.go:141] libmachine: (ha-920193-m03) Calling .GetSSHUsername
	I1209 22:56:17.985733   42018 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m03/id_rsa Username:docker}
	I1209 22:56:18.066096   42018 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 22:56:18.118079   42018 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1209 22:56:18.171125   42018 main.go:141] libmachine: Stopping "ha-920193-m03"...
	I1209 22:56:18.171155   42018 main.go:141] libmachine: (ha-920193-m03) Calling .GetState
	I1209 22:56:18.172671   42018 main.go:141] libmachine: (ha-920193-m03) Calling .Stop
	I1209 22:56:18.176172   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 0/120
	I1209 22:56:19.177492   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 1/120
	I1209 22:56:20.178772   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 2/120
	I1209 22:56:21.180129   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 3/120
	I1209 22:56:22.181873   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 4/120
	I1209 22:56:23.183609   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 5/120
	I1209 22:56:24.185158   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 6/120
	I1209 22:56:25.186578   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 7/120
	I1209 22:56:26.188101   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 8/120
	I1209 22:56:27.189345   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 9/120
	I1209 22:56:28.191590   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 10/120
	I1209 22:56:29.192986   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 11/120
	I1209 22:56:30.194244   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 12/120
	I1209 22:56:31.195630   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 13/120
	I1209 22:56:32.197033   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 14/120
	I1209 22:56:33.198867   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 15/120
	I1209 22:56:34.200451   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 16/120
	I1209 22:56:35.201836   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 17/120
	I1209 22:56:36.203321   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 18/120
	I1209 22:56:37.204875   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 19/120
	I1209 22:56:38.206474   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 20/120
	I1209 22:56:39.208089   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 21/120
	I1209 22:56:40.210194   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 22/120
	I1209 22:56:41.211729   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 23/120
	I1209 22:56:42.213160   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 24/120
	I1209 22:56:43.214991   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 25/120
	I1209 22:56:44.216450   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 26/120
	I1209 22:56:45.217962   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 27/120
	I1209 22:56:46.219418   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 28/120
	I1209 22:56:47.220972   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 29/120
	I1209 22:56:48.222876   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 30/120
	I1209 22:56:49.224520   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 31/120
	I1209 22:56:50.225924   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 32/120
	I1209 22:56:51.227335   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 33/120
	I1209 22:56:52.228728   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 34/120
	I1209 22:56:53.230379   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 35/120
	I1209 22:56:54.231777   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 36/120
	I1209 22:56:55.233432   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 37/120
	I1209 22:56:56.234680   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 38/120
	I1209 22:56:57.235894   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 39/120
	I1209 22:56:58.237677   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 40/120
	I1209 22:56:59.239071   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 41/120
	I1209 22:57:00.240311   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 42/120
	I1209 22:57:01.241796   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 43/120
	I1209 22:57:02.243241   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 44/120
	I1209 22:57:03.245090   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 45/120
	I1209 22:57:04.246564   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 46/120
	I1209 22:57:05.247873   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 47/120
	I1209 22:57:06.249137   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 48/120
	I1209 22:57:07.250243   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 49/120
	I1209 22:57:08.251835   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 50/120
	I1209 22:57:09.253136   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 51/120
	I1209 22:57:10.254515   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 52/120
	I1209 22:57:11.255957   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 53/120
	I1209 22:57:12.257196   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 54/120
	I1209 22:57:13.258862   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 55/120
	I1209 22:57:14.260410   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 56/120
	I1209 22:57:15.261753   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 57/120
	I1209 22:57:16.263164   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 58/120
	I1209 22:57:17.264489   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 59/120
	I1209 22:57:18.266540   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 60/120
	I1209 22:57:19.268164   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 61/120
	I1209 22:57:20.269549   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 62/120
	I1209 22:57:21.270962   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 63/120
	I1209 22:57:22.272344   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 64/120
	I1209 22:57:23.273828   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 65/120
	I1209 22:57:24.275239   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 66/120
	I1209 22:57:25.276760   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 67/120
	I1209 22:57:26.278306   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 68/120
	I1209 22:57:27.280633   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 69/120
	I1209 22:57:28.282078   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 70/120
	I1209 22:57:29.283343   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 71/120
	I1209 22:57:30.284872   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 72/120
	I1209 22:57:31.286176   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 73/120
	I1209 22:57:32.287627   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 74/120
	I1209 22:57:33.289477   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 75/120
	I1209 22:57:34.290975   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 76/120
	I1209 22:57:35.292465   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 77/120
	I1209 22:57:36.293813   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 78/120
	I1209 22:57:37.295307   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 79/120
	I1209 22:57:38.296876   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 80/120
	I1209 22:57:39.298200   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 81/120
	I1209 22:57:40.299576   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 82/120
	I1209 22:57:41.301598   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 83/120
	I1209 22:57:42.302897   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 84/120
	I1209 22:57:43.304631   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 85/120
	I1209 22:57:44.306170   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 86/120
	I1209 22:57:45.307605   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 87/120
	I1209 22:57:46.309015   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 88/120
	I1209 22:57:47.310388   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 89/120
	I1209 22:57:48.312147   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 90/120
	I1209 22:57:49.313613   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 91/120
	I1209 22:57:50.315189   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 92/120
	I1209 22:57:51.316594   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 93/120
	I1209 22:57:52.318469   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 94/120
	I1209 22:57:53.320619   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 95/120
	I1209 22:57:54.322058   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 96/120
	I1209 22:57:55.323389   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 97/120
	I1209 22:57:56.324840   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 98/120
	I1209 22:57:57.326166   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 99/120
	I1209 22:57:58.327904   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 100/120
	I1209 22:57:59.329043   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 101/120
	I1209 22:58:00.330343   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 102/120
	I1209 22:58:01.331779   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 103/120
	I1209 22:58:02.333164   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 104/120
	I1209 22:58:03.334952   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 105/120
	I1209 22:58:04.336230   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 106/120
	I1209 22:58:05.338105   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 107/120
	I1209 22:58:06.339400   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 108/120
	I1209 22:58:07.340642   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 109/120
	I1209 22:58:08.342472   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 110/120
	I1209 22:58:09.343887   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 111/120
	I1209 22:58:10.345155   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 112/120
	I1209 22:58:11.346527   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 113/120
	I1209 22:58:12.348218   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 114/120
	I1209 22:58:13.350070   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 115/120
	I1209 22:58:14.351490   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 116/120
	I1209 22:58:15.352820   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 117/120
	I1209 22:58:16.354101   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 118/120
	I1209 22:58:17.355548   42018 main.go:141] libmachine: (ha-920193-m03) Waiting for machine to stop 119/120
	I1209 22:58:18.356543   42018 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1209 22:58:18.356615   42018 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1209 22:58:18.358506   42018 out.go:201] 
	W1209 22:58:18.359925   42018 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1209 22:58:18.359949   42018 out.go:270] * 
	* 
	W1209 22:58:18.362325   42018 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 22:58:18.363812   42018 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-920193 -v=7 --alsologtostderr" : exit status 82
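Exit status 82 is minikube's GUEST_STOP_TIMEOUT path: the stop command asks libmachine to stop the VM and then polls its state once per second for 120 attempts (the "Waiting for machine to stop N/120" lines above) before giving up with "unable to stop vm, current state \"Running\"". The following is a rough, self-contained Go sketch of that bounded stop-and-poll loop; the driver interface, fakeVM, and stopWithTimeout are illustrative stand-ins, not minikube's implementation.

package main

import (
	"fmt"
	"time"
)

// driver models only the two calls the stop loop needs.
type driver interface {
	Stop() error
	GetState() (string, error)
}

// fakeVM pretends to be a VM that never reaches "Stopped", which is the
// situation the log above captures for ha-920193-m03.
type fakeVM struct{}

func (fakeVM) Stop() error               { return nil }
func (fakeVM) GetState() (string, error) { return "Running", nil }

// stopWithTimeout requests a stop, then polls once per interval up to
// maxRetries times, mirroring the 120 x 1s budget seen in the log.
func stopWithTimeout(d driver, name string, maxRetries int, interval time.Duration) error {
	if err := d.Stop(); err != nil {
		return fmt.Errorf("stopping %q: %w", name, err)
	}
	for i := 0; i < maxRetries; i++ {
		state, err := d.GetState()
		if err != nil {
			return fmt.Errorf("getting state of %q: %w", name, err)
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, maxRetries)
		time.Sleep(interval)
	}
	state, _ := d.GetState()
	return fmt.Errorf("unable to stop vm, current state %q", state)
}

func main() {
	// A short budget keeps the demo quick; the real loop uses 120 x 1s.
	if err := stopWithTimeout(fakeVM{}, "ha-920193-m03", 3, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}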
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-920193 --wait=true -v=7 --alsologtostderr
E1209 22:58:40.228746   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:00:26.332821   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:01:49.401045   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-920193 --wait=true -v=7 --alsologtostderr: (3m50.123024928s)
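The 3m50s restart succeeds because `start --wait=true` against an existing profile reuses what is already on disk: the saved ha-920193 config, the existing KVM domains, and the locally cached preload tarball ("Found local preload ... skipping download" in the Last Start log below). A hedged sketch of that cache check, with the path layout read off the log; the helper names and the MINIKUBE_HOME assumption are mine, not minikube's.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cache location shown in the Last Start log below
// (<minikube home>/cache/preloaded-tarball/...). Naming scheme copied from the log.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

// havePreload reports whether the tarball is already cached, i.e. whether a
// restart can skip the download step.
func havePreload(minikubeHome, k8sVersion string) bool {
	_, err := os.Stat(preloadPath(minikubeHome, k8sVersion))
	return err == nil
}

func main() {
	home := os.Getenv("MINIKUBE_HOME") // assumption: points at the .minikube dir
	if havePreload(home, "v1.31.2") {
		fmt.Println("found local preload, skipping download")
	} else {
		fmt.Println("no cached preload, would download")
	}
}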
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-920193
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-920193 -n ha-920193
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-920193 logs -n 25: (1.992894993s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m02:/home/docker/cp-test_ha-920193-m03_ha-920193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m02 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04:/home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m04 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp testdata/cp-test.txt                                                | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3609340860/001/cp-test_ha-920193-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193:/home/docker/cp-test_ha-920193-m04_ha-920193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193 sudo cat                                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m02:/home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m02 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03:/home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m03 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-920193 node stop m02 -v=7                                                     | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-920193 node start m02 -v=7                                                    | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-920193 -v=7                                                           | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-920193 -v=7                                                                | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-920193 --wait=true -v=7                                                    | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:58 UTC | 09 Dec 24 23:02 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-920193                                                                | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 23:02 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 22:58:18
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 22:58:18.412593   42483 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:58:18.412730   42483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:58:18.412740   42483 out.go:358] Setting ErrFile to fd 2...
	I1209 22:58:18.412745   42483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:58:18.412942   42483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 22:58:18.413530   42483 out.go:352] Setting JSON to false
	I1209 22:58:18.414481   42483 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6049,"bootTime":1733779049,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 22:58:18.414575   42483 start.go:139] virtualization: kvm guest
	I1209 22:58:18.417664   42483 out.go:177] * [ha-920193] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 22:58:18.419191   42483 notify.go:220] Checking for updates...
	I1209 22:58:18.419218   42483 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 22:58:18.420764   42483 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:58:18.422089   42483 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:58:18.423362   42483 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:58:18.424528   42483 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 22:58:18.425856   42483 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 22:58:18.427603   42483 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:58:18.427738   42483 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:58:18.428242   42483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:58:18.428294   42483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:58:18.443344   42483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
	I1209 22:58:18.443857   42483 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:58:18.444510   42483 main.go:141] libmachine: Using API Version  1
	I1209 22:58:18.444532   42483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:58:18.444895   42483 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:58:18.445090   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:58:18.483684   42483 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 22:58:18.485008   42483 start.go:297] selected driver: kvm2
	I1209 22:58:18.485025   42483 start.go:901] validating driver "kvm2" against &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false def
ault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:58:18.485160   42483 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 22:58:18.485509   42483 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:58:18.485596   42483 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 22:58:18.501855   42483 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 22:58:18.502684   42483 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:58:18.502726   42483 cni.go:84] Creating CNI manager for ""
	I1209 22:58:18.502766   42483 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1209 22:58:18.502831   42483 start.go:340] cluster config:
	{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fal
se headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:58:18.502958   42483 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:58:18.504799   42483 out.go:177] * Starting "ha-920193" primary control-plane node in "ha-920193" cluster
	I1209 22:58:18.506097   42483 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:58:18.506135   42483 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 22:58:18.506143   42483 cache.go:56] Caching tarball of preloaded images
	I1209 22:58:18.506222   42483 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:58:18.506233   42483 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:58:18.506342   42483 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:58:18.506583   42483 start.go:360] acquireMachinesLock for ha-920193: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:58:18.506627   42483 start.go:364] duration metric: took 25.012µs to acquireMachinesLock for "ha-920193"
	I1209 22:58:18.506650   42483 start.go:96] Skipping create...Using existing machine configuration
	I1209 22:58:18.506660   42483 fix.go:54] fixHost starting: 
	I1209 22:58:18.506912   42483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:58:18.506941   42483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:58:18.522186   42483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I1209 22:58:18.522661   42483 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:58:18.523105   42483 main.go:141] libmachine: Using API Version  1
	I1209 22:58:18.523130   42483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:58:18.523548   42483 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:58:18.523752   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:58:18.523924   42483 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:58:18.525512   42483 fix.go:112] recreateIfNeeded on ha-920193: state=Running err=<nil>
	W1209 22:58:18.525534   42483 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 22:58:18.527721   42483 out.go:177] * Updating the running kvm2 "ha-920193" VM ...
	I1209 22:58:18.529056   42483 machine.go:93] provisionDockerMachine start ...
	I1209 22:58:18.529097   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:58:18.529278   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:58:18.531835   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.532261   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:18.532298   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.532425   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:58:18.532570   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.532709   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.532805   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:58:18.532952   42483 main.go:141] libmachine: Using SSH client type: native
	I1209 22:58:18.533186   42483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:58:18.533200   42483 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 22:58:18.632292   42483 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193
	
	I1209 22:58:18.632341   42483 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:58:18.632586   42483 buildroot.go:166] provisioning hostname "ha-920193"
	I1209 22:58:18.632608   42483 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:58:18.632808   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:58:18.635953   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.636377   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:18.636403   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.636585   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:58:18.636749   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.636929   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.637065   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:58:18.637237   42483 main.go:141] libmachine: Using SSH client type: native
	I1209 22:58:18.637402   42483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:58:18.637414   42483 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193 && echo "ha-920193" | sudo tee /etc/hostname
	I1209 22:58:18.766542   42483 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193
	
	I1209 22:58:18.766571   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:58:18.769061   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.769409   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:18.769436   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.769571   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:58:18.769738   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.769892   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.770019   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:58:18.770172   42483 main.go:141] libmachine: Using SSH client type: native
	I1209 22:58:18.770373   42483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:58:18.770390   42483 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 22:58:18.872453   42483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:58:18.872489   42483 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:58:18.872508   42483 buildroot.go:174] setting up certificates
	I1209 22:58:18.872546   42483 provision.go:84] configureAuth start
	I1209 22:58:18.872558   42483 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:58:18.872816   42483 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:58:18.875181   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.875540   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:18.875584   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.875704   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:58:18.877692   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.878056   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:18.878088   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.878208   42483 provision.go:143] copyHostCerts
	I1209 22:58:18.878237   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:58:18.878274   42483 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:58:18.878285   42483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:58:18.878382   42483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:58:18.878464   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:58:18.878483   42483 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:58:18.878490   42483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:58:18.878515   42483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:58:18.878557   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:58:18.878573   42483 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:58:18.878580   42483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:58:18.878602   42483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:58:18.878654   42483 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193 san=[127.0.0.1 192.168.39.102 ha-920193 localhost minikube]
	I1209 22:58:18.992132   42483 provision.go:177] copyRemoteCerts
	I1209 22:58:18.992195   42483 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:58:18.992217   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:58:18.994915   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.995289   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:18.995319   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.995510   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:58:18.995685   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.995880   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:58:18.996021   42483 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:58:19.073536   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:58:19.073637   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 22:58:19.097947   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:58:19.098031   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:58:19.122484   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:58:19.122548   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1209 22:58:19.147375   42483 provision.go:87] duration metric: took 274.816586ms to configureAuth
	I1209 22:58:19.147401   42483 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:58:19.147609   42483 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:58:19.147717   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:58:19.150326   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:19.150654   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:19.150674   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:19.150827   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:58:19.150987   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:19.151124   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:19.151272   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:58:19.151410   42483 main.go:141] libmachine: Using SSH client type: native
	I1209 22:58:19.151575   42483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:58:19.151595   42483 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:59:50.052506   42483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:59:50.052532   42483 machine.go:96] duration metric: took 1m31.523465247s to provisionDockerMachine
	I1209 22:59:50.052545   42483 start.go:293] postStartSetup for "ha-920193" (driver="kvm2")
	I1209 22:59:50.052555   42483 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:59:50.052570   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:59:50.052918   42483 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:59:50.052943   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:59:50.056112   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.056556   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:50.056581   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.056714   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:59:50.056909   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:59:50.057045   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:59:50.057209   42483 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:59:50.133596   42483 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:59:50.137820   42483 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:59:50.137855   42483 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:59:50.137930   42483 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:59:50.138043   42483 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:59:50.138059   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:59:50.138148   42483 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:59:50.147276   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:59:50.170622   42483 start.go:296] duration metric: took 118.063431ms for postStartSetup
	I1209 22:59:50.170667   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:59:50.170964   42483 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1209 22:59:50.170989   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:59:50.173534   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.173893   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:50.173922   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.174063   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:59:50.174215   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:59:50.174444   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:59:50.174559   42483 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	W1209 22:59:50.253826   42483 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1209 22:59:50.253863   42483 fix.go:56] duration metric: took 1m31.747202014s for fixHost
	I1209 22:59:50.253885   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:59:50.256297   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.256584   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:50.256612   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.256748   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:59:50.256906   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:59:50.257060   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:59:50.257225   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:59:50.257388   42483 main.go:141] libmachine: Using SSH client type: native
	I1209 22:59:50.257549   42483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:59:50.257560   42483 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:59:50.352208   42483 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733785190.329555923
	
	I1209 22:59:50.352234   42483 fix.go:216] guest clock: 1733785190.329555923
	I1209 22:59:50.352243   42483 fix.go:229] Guest: 2024-12-09 22:59:50.329555923 +0000 UTC Remote: 2024-12-09 22:59:50.253871451 +0000 UTC m=+91.879428390 (delta=75.684472ms)
	I1209 22:59:50.352315   42483 fix.go:200] guest clock delta is within tolerance: 75.684472ms
	I1209 22:59:50.352327   42483 start.go:83] releasing machines lock for "ha-920193", held for 1m31.845689548s
	I1209 22:59:50.352354   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:59:50.352648   42483 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:59:50.355316   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.355651   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:50.355685   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.355862   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:59:50.356346   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:59:50.356502   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:59:50.356587   42483 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:59:50.356637   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:59:50.356673   42483 ssh_runner.go:195] Run: cat /version.json
	I1209 22:59:50.356698   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:59:50.359459   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.359662   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.359900   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:50.359926   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.360059   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:50.360087   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.360093   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:59:50.360219   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:59:50.360286   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:59:50.360347   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:59:50.360395   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:59:50.360445   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:59:50.360497   42483 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:59:50.360527   42483 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:59:50.432748   42483 ssh_runner.go:195] Run: systemctl --version
	I1209 22:59:50.455648   42483 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:59:50.610300   42483 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:59:50.616345   42483 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:59:50.616405   42483 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:59:50.624936   42483 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 22:59:50.624957   42483 start.go:495] detecting cgroup driver to use...
	I1209 22:59:50.625014   42483 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:59:50.642361   42483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:59:50.656242   42483 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:59:50.656308   42483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:59:50.670092   42483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:59:50.683516   42483 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:59:50.832435   42483 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:59:50.978376   42483 docker.go:233] disabling docker service ...
	I1209 22:59:50.978451   42483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:59:50.995396   42483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:59:51.009430   42483 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:59:51.161400   42483 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:59:51.306951   42483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:59:51.320869   42483 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:59:51.338909   42483 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:59:51.338966   42483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.349089   42483 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:59:51.349161   42483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.359353   42483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.369439   42483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.379616   42483 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:59:51.390640   42483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.400824   42483 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.411373   42483 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.421726   42483 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:59:51.430844   42483 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 22:59:51.440646   42483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:59:51.602831   42483 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 22:59:51.830633   42483 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:59:51.830694   42483 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:59:51.835756   42483 start.go:563] Will wait 60s for crictl version
	I1209 22:59:51.835807   42483 ssh_runner.go:195] Run: which crictl
	I1209 22:59:51.839353   42483 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:59:51.874837   42483 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:59:51.874916   42483 ssh_runner.go:195] Run: crio --version
	I1209 22:59:51.904570   42483 ssh_runner.go:195] Run: crio --version
	I1209 22:59:51.934503   42483 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:59:51.936210   42483 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:59:51.938717   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:51.939046   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:51.939075   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:51.939254   42483 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:59:51.944059   42483 kubeadm.go:883] updating cluster {Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 22:59:51.944197   42483 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:59:51.944236   42483 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:59:51.993907   42483 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 22:59:51.993932   42483 crio.go:433] Images already preloaded, skipping extraction
	I1209 22:59:51.993998   42483 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:59:52.034846   42483 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 22:59:52.034874   42483 cache_images.go:84] Images are preloaded, skipping loading
	I1209 22:59:52.034885   42483 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.2 crio true true} ...
	I1209 22:59:52.035018   42483 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 22:59:52.035104   42483 ssh_runner.go:195] Run: crio config
	I1209 22:59:52.081841   42483 cni.go:84] Creating CNI manager for ""
	I1209 22:59:52.081859   42483 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1209 22:59:52.081867   42483 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 22:59:52.081893   42483 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-920193 NodeName:ha-920193 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 22:59:52.082020   42483 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-920193"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.102"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 22:59:52.082047   42483 kube-vip.go:115] generating kube-vip config ...
	I1209 22:59:52.082086   42483 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:59:52.093583   42483 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:59:52.093689   42483 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
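(The static pod manifest above is what pins the control-plane VIP: kube-vip binds address 192.168.39.254 on port 8443 — the APIServerHAVIP and APIServerPort from the cluster config — with control-plane load-balancing enabled. As an illustrative sketch only, not part of the test run, one way to confirm the VIP answers once the control plane is back up:

	# Hit the apiserver health endpoint through the kube-vip VIP taken
	# from the manifest above; -k skips verification because the serving
	# certificate is issued by the cluster-local minikubeCA.
	curl -sk https://192.168.39.254:8443/healthz; echo

Either "ok" or an authorization error indicates the VIP is being served; a connection timeout or "no route to host" would instead point at kube-vip not holding the address.)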
	I1209 22:59:52.093748   42483 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:59:52.103254   42483 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 22:59:52.103314   42483 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1209 22:59:52.112847   42483 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1209 22:59:52.129331   42483 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:59:52.146516   42483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
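(The kubeadm.yaml.new copied here carries the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents dumped at 22:59:52.082020 above. A minimal sketch, assuming the bundled kubeadm under /var/lib/minikube/binaries/v1.31.2 offers the `config validate` subcommand (present in recent kubeadm releases), for sanity-checking that file on the guest without touching the running cluster:

	# Validate the generated kubeadm config in place; a non-zero exit
	# status would flag schema or field errors before kubeadm consumes it.
	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
)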
	I1209 22:59:52.163522   42483 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 22:59:52.182237   42483 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:59:52.186084   42483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:59:52.336356   42483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:59:52.351388   42483 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.102
	I1209 22:59:52.351412   42483 certs.go:194] generating shared ca certs ...
	I1209 22:59:52.351429   42483 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:59:52.351613   42483 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:59:52.351666   42483 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:59:52.351680   42483 certs.go:256] generating profile certs ...
	I1209 22:59:52.351790   42483 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:59:52.351827   42483 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.d8ce0dbc
	I1209 22:59:52.351850   42483 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.d8ce0dbc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.43 192.168.39.45 192.168.39.254]
	I1209 22:59:52.428226   42483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.d8ce0dbc ...
	I1209 22:59:52.428252   42483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.d8ce0dbc: {Name:mkc7bb7c7b8e01f95f235c9711f9c9c93e6e2550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:59:52.428415   42483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.d8ce0dbc ...
	I1209 22:59:52.428427   42483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.d8ce0dbc: {Name:mka1046e76e982196ecdc2eb0d77c4a07b1dbe34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:59:52.428496   42483 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.d8ce0dbc -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:59:52.428644   42483 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.d8ce0dbc -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
	I1209 22:59:52.428765   42483 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:59:52.428779   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:59:52.428791   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:59:52.428806   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:59:52.428821   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:59:52.428833   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:59:52.428852   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:59:52.428863   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:59:52.428876   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:59:52.428928   42483 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:59:52.428954   42483 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:59:52.428963   42483 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:59:52.428986   42483 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:59:52.429007   42483 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:59:52.429029   42483 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:59:52.429066   42483 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:59:52.429093   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:59:52.429106   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:59:52.429118   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:59:52.429669   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:59:52.454421   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:59:52.479165   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:59:52.502636   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:59:52.525999   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 22:59:52.549503   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 22:59:52.575002   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:59:52.598502   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:59:52.621566   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:59:52.645812   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:59:52.668854   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:59:52.691401   42483 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 22:59:52.707802   42483 ssh_runner.go:195] Run: openssl version
	I1209 22:59:52.713695   42483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:59:52.724315   42483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:59:52.728590   42483 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:59:52.728652   42483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:59:52.733965   42483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 22:59:52.743892   42483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:59:52.754704   42483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:59:52.758861   42483 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:59:52.758904   42483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:59:52.764380   42483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:59:52.773686   42483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:59:52.785209   42483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:59:52.789697   42483 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:59:52.789745   42483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:59:52.795553   42483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 22:59:52.804829   42483 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:59:52.809130   42483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 22:59:52.814609   42483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 22:59:52.820098   42483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 22:59:52.825515   42483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 22:59:52.831089   42483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 22:59:52.836385   42483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 22:59:52.841696   42483 kubeadm.go:392] StartCluster: {Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:59:52.841808   42483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 22:59:52.841857   42483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 22:59:52.880781   42483 cri.go:89] found id: "18d5433d683b5f983c0b88682825cbff38c24767c98a7a2a107a54d81f3949aa"
	I1209 22:59:52.880800   42483 cri.go:89] found id: "acf0698e169715e43d83cf421d42f9e817aca897ce21fdda7308883df102fe59"
	I1209 22:59:52.880804   42483 cri.go:89] found id: "9c35bdf23f46681c1892764f5d7c07785fcfe5e74369cb517dd4feb6dd774790"
	I1209 22:59:52.880807   42483 cri.go:89] found id: "14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c"
	I1209 22:59:52.880809   42483 cri.go:89] found id: "6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a"
	I1209 22:59:52.880812   42483 cri.go:89] found id: "a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75"
	I1209 22:59:52.880814   42483 cri.go:89] found id: "d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a"
	I1209 22:59:52.880816   42483 cri.go:89] found id: "233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678"
	I1209 22:59:52.880819   42483 cri.go:89] found id: "b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f"
	I1209 22:59:52.880835   42483 cri.go:89] found id: "2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581"
	I1209 22:59:52.880838   42483 cri.go:89] found id: "f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a"
	I1209 22:59:52.880841   42483 cri.go:89] found id: "b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9"
	I1209 22:59:52.880843   42483 cri.go:89] found id: "6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963"
	I1209 22:59:52.880846   42483 cri.go:89] found id: ""
	I1209 22:59:52.880888   42483 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-920193 -n ha-920193
helpers_test.go:261: (dbg) Run:  kubectl --context ha-920193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (354.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 stop -v=7 --alsologtostderr
E1209 23:03:12.522640   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-920193 stop -v=7 --alsologtostderr: exit status 82 (2m0.466342161s)

                                                
                                                
-- stdout --
	* Stopping node "ha-920193-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:02:28.267440   44226 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:02:28.267556   44226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:02:28.267584   44226 out.go:358] Setting ErrFile to fd 2...
	I1209 23:02:28.267592   44226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:02:28.267787   44226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:02:28.268036   44226 out.go:352] Setting JSON to false
	I1209 23:02:28.268108   44226 mustload.go:65] Loading cluster: ha-920193
	I1209 23:02:28.268470   44226 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:02:28.268553   44226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 23:02:28.268728   44226 mustload.go:65] Loading cluster: ha-920193
	I1209 23:02:28.268852   44226 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:02:28.268888   44226 stop.go:39] StopHost: ha-920193-m04
	I1209 23:02:28.269228   44226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:02:28.269262   44226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:02:28.284090   44226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I1209 23:02:28.284518   44226 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:02:28.285068   44226 main.go:141] libmachine: Using API Version  1
	I1209 23:02:28.285091   44226 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:02:28.285454   44226 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:02:28.287785   44226 out.go:177] * Stopping node "ha-920193-m04"  ...
	I1209 23:02:28.288952   44226 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 23:02:28.288988   44226 main.go:141] libmachine: (ha-920193-m04) Calling .DriverName
	I1209 23:02:28.289214   44226 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 23:02:28.289238   44226 main.go:141] libmachine: (ha-920193-m04) Calling .GetSSHHostname
	I1209 23:02:28.292517   44226 main.go:141] libmachine: (ha-920193-m04) DBG | domain ha-920193-m04 has defined MAC address 52:54:00:ac:81:67 in network mk-ha-920193
	I1209 23:02:28.292953   44226 main.go:141] libmachine: (ha-920193-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:81:67", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-10 00:01:56 +0000 UTC Type:0 Mac:52:54:00:ac:81:67 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-920193-m04 Clientid:01:52:54:00:ac:81:67}
	I1209 23:02:28.292984   44226 main.go:141] libmachine: (ha-920193-m04) DBG | domain ha-920193-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:ac:81:67 in network mk-ha-920193
	I1209 23:02:28.293168   44226 main.go:141] libmachine: (ha-920193-m04) Calling .GetSSHPort
	I1209 23:02:28.293374   44226 main.go:141] libmachine: (ha-920193-m04) Calling .GetSSHKeyPath
	I1209 23:02:28.293540   44226 main.go:141] libmachine: (ha-920193-m04) Calling .GetSSHUsername
	I1209 23:02:28.293697   44226 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193-m04/id_rsa Username:docker}
	I1209 23:02:28.381957   44226 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 23:02:28.434952   44226 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1209 23:02:28.487314   44226 main.go:141] libmachine: Stopping "ha-920193-m04"...
	I1209 23:02:28.487346   44226 main.go:141] libmachine: (ha-920193-m04) Calling .GetState
	I1209 23:02:28.488938   44226 main.go:141] libmachine: (ha-920193-m04) Calling .Stop
	I1209 23:02:28.492235   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 0/120
	I1209 23:02:29.493658   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 1/120
	I1209 23:02:30.495218   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 2/120
	I1209 23:02:31.496426   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 3/120
	I1209 23:02:32.497957   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 4/120
	I1209 23:02:33.499872   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 5/120
	I1209 23:02:34.502101   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 6/120
	I1209 23:02:35.503342   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 7/120
	I1209 23:02:36.505199   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 8/120
	I1209 23:02:37.506622   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 9/120
	I1209 23:02:38.508733   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 10/120
	I1209 23:02:39.510355   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 11/120
	I1209 23:02:40.512160   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 12/120
	I1209 23:02:41.513994   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 13/120
	I1209 23:02:42.515334   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 14/120
	I1209 23:02:43.516991   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 15/120
	I1209 23:02:44.518482   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 16/120
	I1209 23:02:45.520414   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 17/120
	I1209 23:02:46.521906   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 18/120
	I1209 23:02:47.523329   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 19/120
	I1209 23:02:48.525718   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 20/120
	I1209 23:02:49.527076   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 21/120
	I1209 23:02:50.528321   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 22/120
	I1209 23:02:51.529579   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 23/120
	I1209 23:02:52.530734   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 24/120
	I1209 23:02:53.532488   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 25/120
	I1209 23:02:54.533871   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 26/120
	I1209 23:02:55.535330   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 27/120
	I1209 23:02:56.536631   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 28/120
	I1209 23:02:57.538075   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 29/120
	I1209 23:02:58.540306   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 30/120
	I1209 23:02:59.541977   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 31/120
	I1209 23:03:00.543297   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 32/120
	I1209 23:03:01.544575   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 33/120
	I1209 23:03:02.545960   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 34/120
	I1209 23:03:03.547608   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 35/120
	I1209 23:03:04.548849   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 36/120
	I1209 23:03:05.550799   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 37/120
	I1209 23:03:06.552027   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 38/120
	I1209 23:03:07.553249   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 39/120
	I1209 23:03:08.555211   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 40/120
	I1209 23:03:09.556350   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 41/120
	I1209 23:03:10.558006   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 42/120
	I1209 23:03:11.559389   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 43/120
	I1209 23:03:12.560750   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 44/120
	I1209 23:03:13.562841   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 45/120
	I1209 23:03:14.564092   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 46/120
	I1209 23:03:15.566130   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 47/120
	I1209 23:03:16.567423   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 48/120
	I1209 23:03:17.568951   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 49/120
	I1209 23:03:18.570760   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 50/120
	I1209 23:03:19.572171   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 51/120
	I1209 23:03:20.574066   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 52/120
	I1209 23:03:21.576453   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 53/120
	I1209 23:03:22.578316   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 54/120
	I1209 23:03:23.579757   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 55/120
	I1209 23:03:24.581253   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 56/120
	I1209 23:03:25.582660   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 57/120
	I1209 23:03:26.584230   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 58/120
	I1209 23:03:27.585653   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 59/120
	I1209 23:03:28.587529   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 60/120
	I1209 23:03:29.588693   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 61/120
	I1209 23:03:30.589881   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 62/120
	I1209 23:03:31.591211   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 63/120
	I1209 23:03:32.592460   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 64/120
	I1209 23:03:33.594101   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 65/120
	I1209 23:03:34.595548   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 66/120
	I1209 23:03:35.596761   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 67/120
	I1209 23:03:36.597966   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 68/120
	I1209 23:03:37.599172   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 69/120
	I1209 23:03:38.601264   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 70/120
	I1209 23:03:39.602504   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 71/120
	I1209 23:03:40.604598   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 72/120
	I1209 23:03:41.606020   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 73/120
	I1209 23:03:42.607186   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 74/120
	I1209 23:03:43.608960   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 75/120
	I1209 23:03:44.610211   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 76/120
	I1209 23:03:45.611435   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 77/120
	I1209 23:03:46.612654   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 78/120
	I1209 23:03:47.613868   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 79/120
	I1209 23:03:48.616077   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 80/120
	I1209 23:03:49.617950   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 81/120
	I1209 23:03:50.619265   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 82/120
	I1209 23:03:51.620714   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 83/120
	I1209 23:03:52.622028   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 84/120
	I1209 23:03:53.623810   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 85/120
	I1209 23:03:54.625039   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 86/120
	I1209 23:03:55.626526   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 87/120
	I1209 23:03:56.627840   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 88/120
	I1209 23:03:57.629967   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 89/120
	I1209 23:03:58.632111   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 90/120
	I1209 23:03:59.633619   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 91/120
	I1209 23:04:00.634944   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 92/120
	I1209 23:04:01.636429   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 93/120
	I1209 23:04:02.637812   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 94/120
	I1209 23:04:03.639766   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 95/120
	I1209 23:04:04.642445   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 96/120
	I1209 23:04:05.643789   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 97/120
	I1209 23:04:06.645357   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 98/120
	I1209 23:04:07.646756   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 99/120
	I1209 23:04:08.648719   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 100/120
	I1209 23:04:09.650044   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 101/120
	I1209 23:04:10.651339   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 102/120
	I1209 23:04:11.653069   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 103/120
	I1209 23:04:12.654163   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 104/120
	I1209 23:04:13.655713   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 105/120
	I1209 23:04:14.657201   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 106/120
	I1209 23:04:15.658527   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 107/120
	I1209 23:04:16.660171   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 108/120
	I1209 23:04:17.661993   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 109/120
	I1209 23:04:18.664098   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 110/120
	I1209 23:04:19.665901   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 111/120
	I1209 23:04:20.667475   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 112/120
	I1209 23:04:21.668849   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 113/120
	I1209 23:04:22.670116   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 114/120
	I1209 23:04:23.672095   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 115/120
	I1209 23:04:24.673594   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 116/120
	I1209 23:04:25.675929   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 117/120
	I1209 23:04:26.677419   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 118/120
	I1209 23:04:27.679213   44226 main.go:141] libmachine: (ha-920193-m04) Waiting for machine to stop 119/120
	I1209 23:04:28.680521   44226 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1209 23:04:28.680586   44226 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1209 23:04:28.682904   44226 out.go:201] 
	W1209 23:04:28.684924   44226 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1209 23:04:28.684944   44226 out.go:270] * 
	* 
	W1209 23:04:28.686965   44226 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 23:04:28.688245   44226 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-920193 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr: (18.987261983s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-920193 -n ha-920193
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-920193 logs -n 25: (1.795088337s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-920193 ssh -n ha-920193-m02 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04:/home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m04 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp testdata/cp-test.txt                                                | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3609340860/001/cp-test_ha-920193-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193:/home/docker/cp-test_ha-920193-m04_ha-920193.txt                       |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193 sudo cat                                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193.txt                                 |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m02:/home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m02 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m03:/home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n                                                                 | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | ha-920193-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-920193 ssh -n ha-920193-m03 sudo cat                                          | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC | 09 Dec 24 22:53 UTC |
	|         | /home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-920193 node stop m02 -v=7                                                     | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-920193 node start m02 -v=7                                                    | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-920193 -v=7                                                           | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-920193 -v=7                                                                | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-920193 --wait=true -v=7                                                    | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 22:58 UTC | 09 Dec 24 23:02 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-920193                                                                | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 23:02 UTC |                     |
	| node    | ha-920193 node delete m03 -v=7                                                   | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 23:02 UTC | 09 Dec 24 23:02 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-920193 stop -v=7                                                              | ha-920193 | jenkins | v1.34.0 | 09 Dec 24 23:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 22:58:18
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 22:58:18.412593   42483 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:58:18.412730   42483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:58:18.412740   42483 out.go:358] Setting ErrFile to fd 2...
	I1209 22:58:18.412745   42483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:58:18.412942   42483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 22:58:18.413530   42483 out.go:352] Setting JSON to false
	I1209 22:58:18.414481   42483 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6049,"bootTime":1733779049,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 22:58:18.414575   42483 start.go:139] virtualization: kvm guest
	I1209 22:58:18.417664   42483 out.go:177] * [ha-920193] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 22:58:18.419191   42483 notify.go:220] Checking for updates...
	I1209 22:58:18.419218   42483 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 22:58:18.420764   42483 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:58:18.422089   42483 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:58:18.423362   42483 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:58:18.424528   42483 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 22:58:18.425856   42483 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 22:58:18.427603   42483 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:58:18.427738   42483 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:58:18.428242   42483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:58:18.428294   42483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:58:18.443344   42483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
	I1209 22:58:18.443857   42483 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:58:18.444510   42483 main.go:141] libmachine: Using API Version  1
	I1209 22:58:18.444532   42483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:58:18.444895   42483 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:58:18.445090   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:58:18.483684   42483 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 22:58:18.485008   42483 start.go:297] selected driver: kvm2
	I1209 22:58:18.485025   42483 start.go:901] validating driver "kvm2" against &{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:58:18.485160   42483 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 22:58:18.485509   42483 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:58:18.485596   42483 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 22:58:18.501855   42483 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 22:58:18.502684   42483 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 22:58:18.502726   42483 cni.go:84] Creating CNI manager for ""
	I1209 22:58:18.502766   42483 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1209 22:58:18.502831   42483 start.go:340] cluster config:
	{Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:58:18.502958   42483 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:58:18.504799   42483 out.go:177] * Starting "ha-920193" primary control-plane node in "ha-920193" cluster
	I1209 22:58:18.506097   42483 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:58:18.506135   42483 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 22:58:18.506143   42483 cache.go:56] Caching tarball of preloaded images
	I1209 22:58:18.506222   42483 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 22:58:18.506233   42483 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 22:58:18.506342   42483 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/config.json ...
	I1209 22:58:18.506583   42483 start.go:360] acquireMachinesLock for ha-920193: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 22:58:18.506627   42483 start.go:364] duration metric: took 25.012µs to acquireMachinesLock for "ha-920193"
	I1209 22:58:18.506650   42483 start.go:96] Skipping create...Using existing machine configuration
	I1209 22:58:18.506660   42483 fix.go:54] fixHost starting: 
	I1209 22:58:18.506912   42483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:58:18.506941   42483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:58:18.522186   42483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I1209 22:58:18.522661   42483 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:58:18.523105   42483 main.go:141] libmachine: Using API Version  1
	I1209 22:58:18.523130   42483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:58:18.523548   42483 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:58:18.523752   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:58:18.523924   42483 main.go:141] libmachine: (ha-920193) Calling .GetState
	I1209 22:58:18.525512   42483 fix.go:112] recreateIfNeeded on ha-920193: state=Running err=<nil>
	W1209 22:58:18.525534   42483 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 22:58:18.527721   42483 out.go:177] * Updating the running kvm2 "ha-920193" VM ...
	I1209 22:58:18.529056   42483 machine.go:93] provisionDockerMachine start ...
	I1209 22:58:18.529097   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:58:18.529278   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:58:18.531835   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.532261   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:18.532298   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.532425   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:58:18.532570   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.532709   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.532805   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:58:18.532952   42483 main.go:141] libmachine: Using SSH client type: native
	I1209 22:58:18.533186   42483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:58:18.533200   42483 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 22:58:18.632292   42483 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193
	
	I1209 22:58:18.632341   42483 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:58:18.632586   42483 buildroot.go:166] provisioning hostname "ha-920193"
	I1209 22:58:18.632608   42483 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:58:18.632808   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:58:18.635953   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.636377   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:18.636403   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.636585   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:58:18.636749   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.636929   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.637065   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:58:18.637237   42483 main.go:141] libmachine: Using SSH client type: native
	I1209 22:58:18.637402   42483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:58:18.637414   42483 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-920193 && echo "ha-920193" | sudo tee /etc/hostname
	I1209 22:58:18.766542   42483 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-920193
	
	I1209 22:58:18.766571   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:58:18.769061   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.769409   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:18.769436   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.769571   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:58:18.769738   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.769892   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.770019   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:58:18.770172   42483 main.go:141] libmachine: Using SSH client type: native
	I1209 22:58:18.770373   42483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:58:18.770390   42483 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-920193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-920193/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-920193' | sudo tee -a /etc/hosts; 
				fi
			fi
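
The shell fragment above is the usual way to make the guest resolve its own hostname: rewrite an existing 127.0.1.1 entry in /etc/hosts if one exists, otherwise append one. A minimal Go sketch of how such a command string could be assembled follows; the helper name hostsFixCommand and the use of fmt.Sprintf are illustrative assumptions, not minikube's actual provisioner code.

package main

import "fmt"

// hostsFixCommand builds the same kind of shell snippet that appears in the
// log above: it makes /etc/hosts map 127.0.1.1 to the given hostname, editing
// an existing entry when present and appending one otherwise.
// Hypothetical helper, for illustration only.
func hostsFixCommand(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsFixCommand("ha-920193"))
}
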
	I1209 22:58:18.872453   42483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 22:58:18.872489   42483 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 22:58:18.872508   42483 buildroot.go:174] setting up certificates
	I1209 22:58:18.872546   42483 provision.go:84] configureAuth start
	I1209 22:58:18.872558   42483 main.go:141] libmachine: (ha-920193) Calling .GetMachineName
	I1209 22:58:18.872816   42483 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:58:18.875181   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.875540   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:18.875584   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.875704   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:58:18.877692   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.878056   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:18.878088   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.878208   42483 provision.go:143] copyHostCerts
	I1209 22:58:18.878237   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:58:18.878274   42483 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 22:58:18.878285   42483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 22:58:18.878382   42483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 22:58:18.878464   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:58:18.878483   42483 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 22:58:18.878490   42483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 22:58:18.878515   42483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 22:58:18.878557   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:58:18.878573   42483 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 22:58:18.878580   42483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 22:58:18.878602   42483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 22:58:18.878654   42483 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.ha-920193 san=[127.0.0.1 192.168.39.102 ha-920193 localhost minikube]
	I1209 22:58:18.992132   42483 provision.go:177] copyRemoteCerts
	I1209 22:58:18.992195   42483 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 22:58:18.992217   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:58:18.994915   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.995289   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:18.995319   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:18.995510   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:58:18.995685   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:18.995880   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:58:18.996021   42483 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:58:19.073536   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 22:58:19.073637   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 22:58:19.097947   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 22:58:19.098031   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 22:58:19.122484   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 22:58:19.122548   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1209 22:58:19.147375   42483 provision.go:87] duration metric: took 274.816586ms to configureAuth
	I1209 22:58:19.147401   42483 buildroot.go:189] setting minikube options for container-runtime
	I1209 22:58:19.147609   42483 config.go:182] Loaded profile config "ha-920193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:58:19.147717   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:58:19.150326   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:19.150654   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:58:19.150674   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:58:19.150827   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:58:19.150987   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:19.151124   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:58:19.151272   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:58:19.151410   42483 main.go:141] libmachine: Using SSH client type: native
	I1209 22:58:19.151575   42483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:58:19.151595   42483 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 22:59:50.052506   42483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 22:59:50.052532   42483 machine.go:96] duration metric: took 1m31.523465247s to provisionDockerMachine
	I1209 22:59:50.052545   42483 start.go:293] postStartSetup for "ha-920193" (driver="kvm2")
	I1209 22:59:50.052555   42483 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 22:59:50.052570   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:59:50.052918   42483 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 22:59:50.052943   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:59:50.056112   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.056556   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:50.056581   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.056714   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:59:50.056909   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:59:50.057045   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:59:50.057209   42483 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:59:50.133596   42483 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 22:59:50.137820   42483 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 22:59:50.137855   42483 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 22:59:50.137930   42483 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 22:59:50.138043   42483 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 22:59:50.138059   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 22:59:50.138148   42483 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 22:59:50.147276   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:59:50.170622   42483 start.go:296] duration metric: took 118.063431ms for postStartSetup
	I1209 22:59:50.170667   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:59:50.170964   42483 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1209 22:59:50.170989   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:59:50.173534   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.173893   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:50.173922   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.174063   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:59:50.174215   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:59:50.174444   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:59:50.174559   42483 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	W1209 22:59:50.253826   42483 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1209 22:59:50.253863   42483 fix.go:56] duration metric: took 1m31.747202014s for fixHost
	I1209 22:59:50.253885   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:59:50.256297   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.256584   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:50.256612   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.256748   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:59:50.256906   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:59:50.257060   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:59:50.257225   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:59:50.257388   42483 main.go:141] libmachine: Using SSH client type: native
	I1209 22:59:50.257549   42483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1209 22:59:50.257560   42483 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 22:59:50.352208   42483 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733785190.329555923
	
	I1209 22:59:50.352234   42483 fix.go:216] guest clock: 1733785190.329555923
	I1209 22:59:50.352243   42483 fix.go:229] Guest: 2024-12-09 22:59:50.329555923 +0000 UTC Remote: 2024-12-09 22:59:50.253871451 +0000 UTC m=+91.879428390 (delta=75.684472ms)
	I1209 22:59:50.352315   42483 fix.go:200] guest clock delta is within tolerance: 75.684472ms
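
The two timestamps above are compared to decide whether the guest clock needs correcting: the delta is the guest's date +%s.%N reading minus the host-side reference, about 75.68ms here. A rough sketch of that comparison using the values from the log; the 2-second tolerance is a placeholder, since the log does not print the actual threshold.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the two log lines above.
	guest := time.Unix(0, 1733785190329555923)                                 // guest `date +%s.%N`
	host := time.Date(2024, time.December, 9, 22, 59, 50, 253871451, time.UTC) // host-side reference

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}

	// Placeholder threshold: the log only says the delta is "within tolerance";
	// it does not state the limit itself.
	tolerance := 2 * time.Second
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
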
	I1209 22:59:50.352327   42483 start.go:83] releasing machines lock for "ha-920193", held for 1m31.845689548s
	I1209 22:59:50.352354   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:59:50.352648   42483 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:59:50.355316   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.355651   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:50.355685   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.355862   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:59:50.356346   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:59:50.356502   42483 main.go:141] libmachine: (ha-920193) Calling .DriverName
	I1209 22:59:50.356587   42483 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 22:59:50.356637   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:59:50.356673   42483 ssh_runner.go:195] Run: cat /version.json
	I1209 22:59:50.356698   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHHostname
	I1209 22:59:50.359459   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.359662   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.359900   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:50.359926   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.360059   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:50.360087   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:50.360093   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:59:50.360219   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHPort
	I1209 22:59:50.360286   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:59:50.360347   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHKeyPath
	I1209 22:59:50.360395   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:59:50.360445   42483 main.go:141] libmachine: (ha-920193) Calling .GetSSHUsername
	I1209 22:59:50.360497   42483 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:59:50.360527   42483 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/ha-920193/id_rsa Username:docker}
	I1209 22:59:50.432748   42483 ssh_runner.go:195] Run: systemctl --version
	I1209 22:59:50.455648   42483 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 22:59:50.610300   42483 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 22:59:50.616345   42483 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 22:59:50.616405   42483 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 22:59:50.624936   42483 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 22:59:50.624957   42483 start.go:495] detecting cgroup driver to use...
	I1209 22:59:50.625014   42483 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 22:59:50.642361   42483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 22:59:50.656242   42483 docker.go:217] disabling cri-docker service (if available) ...
	I1209 22:59:50.656308   42483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 22:59:50.670092   42483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 22:59:50.683516   42483 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 22:59:50.832435   42483 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 22:59:50.978376   42483 docker.go:233] disabling docker service ...
	I1209 22:59:50.978451   42483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 22:59:50.995396   42483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 22:59:51.009430   42483 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 22:59:51.161400   42483 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 22:59:51.306951   42483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 22:59:51.320869   42483 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 22:59:51.338909   42483 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 22:59:51.338966   42483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.349089   42483 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 22:59:51.349161   42483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.359353   42483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.369439   42483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.379616   42483 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 22:59:51.390640   42483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.400824   42483 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.411373   42483 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 22:59:51.421726   42483 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 22:59:51.430844   42483 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 22:59:51.440646   42483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:59:51.602831   42483 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 22:59:51.830633   42483 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 22:59:51.830694   42483 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 22:59:51.835756   42483 start.go:563] Will wait 60s for crictl version
	I1209 22:59:51.835807   42483 ssh_runner.go:195] Run: which crictl
	I1209 22:59:51.839353   42483 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 22:59:51.874837   42483 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 22:59:51.874916   42483 ssh_runner.go:195] Run: crio --version
	I1209 22:59:51.904570   42483 ssh_runner.go:195] Run: crio --version
	I1209 22:59:51.934503   42483 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 22:59:51.936210   42483 main.go:141] libmachine: (ha-920193) Calling .GetIP
	I1209 22:59:51.938717   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:51.939046   42483 main.go:141] libmachine: (ha-920193) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:3c:cb", ip: ""} in network mk-ha-920193: {Iface:virbr1 ExpiryTime:2024-12-09 23:49:17 +0000 UTC Type:0 Mac:52:54:00:eb:3c:cb Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-920193 Clientid:01:52:54:00:eb:3c:cb}
	I1209 22:59:51.939075   42483 main.go:141] libmachine: (ha-920193) DBG | domain ha-920193 has defined IP address 192.168.39.102 and MAC address 52:54:00:eb:3c:cb in network mk-ha-920193
	I1209 22:59:51.939254   42483 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 22:59:51.944059   42483 kubeadm.go:883] updating cluster {Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 22:59:51.944197   42483 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:59:51.944236   42483 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:59:51.993907   42483 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 22:59:51.993932   42483 crio.go:433] Images already preloaded, skipping extraction
	I1209 22:59:51.993998   42483 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 22:59:52.034846   42483 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 22:59:52.034874   42483 cache_images.go:84] Images are preloaded, skipping loading
	I1209 22:59:52.034885   42483 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.2 crio true true} ...
	I1209 22:59:52.035018   42483 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-920193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 22:59:52.035104   42483 ssh_runner.go:195] Run: crio config
	I1209 22:59:52.081841   42483 cni.go:84] Creating CNI manager for ""
	I1209 22:59:52.081859   42483 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1209 22:59:52.081867   42483 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 22:59:52.081893   42483 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-920193 NodeName:ha-920193 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 22:59:52.082020   42483 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-920193"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.102"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
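
The multi-document YAML above is rendered from the kubeadm options logged at kubeadm.go:189. As an illustration of how one fragment of such a config might be templated, here is a short sketch; text/template and the field names are assumptions for the sketch, not necessarily what minikube itself uses.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down fragment of the ClusterConfiguration shown above. The
// template text and the option struct are illustrative only.
const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := struct {
		ControlPlaneAddress string
		APIServerPort       int
		KubernetesVersion   string
		DNSDomain           string
		PodSubnet           string
		ServiceCIDR         string
	}{
		// Values taken from the kubeadm options logged above.
		ControlPlaneAddress: "control-plane.minikube.internal",
		APIServerPort:       8443,
		KubernetesVersion:   "v1.31.2",
		DNSDomain:           "cluster.local",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
	}
	if err := template.Must(template.New("kubeadm").Parse(clusterConfigTmpl)).Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
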
	
	I1209 22:59:52.082047   42483 kube-vip.go:115] generating kube-vip config ...
	I1209 22:59:52.082086   42483 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 22:59:52.093583   42483 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 22:59:52.093689   42483 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1209 22:59:52.093748   42483 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 22:59:52.103254   42483 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 22:59:52.103314   42483 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1209 22:59:52.112847   42483 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1209 22:59:52.129331   42483 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 22:59:52.146516   42483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1209 22:59:52.163522   42483 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 22:59:52.182237   42483 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 22:59:52.186084   42483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 22:59:52.336356   42483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 22:59:52.351388   42483 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193 for IP: 192.168.39.102
	I1209 22:59:52.351412   42483 certs.go:194] generating shared ca certs ...
	I1209 22:59:52.351429   42483 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:59:52.351613   42483 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 22:59:52.351666   42483 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 22:59:52.351680   42483 certs.go:256] generating profile certs ...
	I1209 22:59:52.351790   42483 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/client.key
	I1209 22:59:52.351827   42483 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.d8ce0dbc
	I1209 22:59:52.351850   42483 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.d8ce0dbc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.43 192.168.39.45 192.168.39.254]
	I1209 22:59:52.428226   42483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.d8ce0dbc ...
	I1209 22:59:52.428252   42483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.d8ce0dbc: {Name:mkc7bb7c7b8e01f95f235c9711f9c9c93e6e2550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:59:52.428415   42483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.d8ce0dbc ...
	I1209 22:59:52.428427   42483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.d8ce0dbc: {Name:mka1046e76e982196ecdc2eb0d77c4a07b1dbe34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 22:59:52.428496   42483 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt.d8ce0dbc -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt
	I1209 22:59:52.428644   42483 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key.d8ce0dbc -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key
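
The apiserver certificate regenerated above carries every control-plane IP plus the HA VIP (192.168.39.254) as SANs, so the same certificate is valid no matter which endpoint a client dials. A self-contained Go sketch of issuing such a certificate with crypto/x509 follows; only the IP SAN list is taken from the log, while the subjects, validity window, key size, and error handling are placeholders for the sketch.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// IP SANs copied from the log line above; everything else is a placeholder.
	sans := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.102"), net.ParseIP("192.168.39.43"),
		net.ParseIP("192.168.39.45"), net.ParseIP("192.168.39.254"),
	}

	// Throwaway CA used only to sign the serving certificate in this sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving certificate whose IP SANs cover all control-plane IPs and the VIP.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses:  sans,
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the signed certificate in PEM form.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
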
	I1209 22:59:52.428765   42483 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key
	I1209 22:59:52.428779   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 22:59:52.428791   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 22:59:52.428806   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 22:59:52.428821   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 22:59:52.428833   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 22:59:52.428852   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 22:59:52.428863   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 22:59:52.428876   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 22:59:52.428928   42483 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 22:59:52.428954   42483 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 22:59:52.428963   42483 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 22:59:52.428986   42483 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 22:59:52.429007   42483 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 22:59:52.429029   42483 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 22:59:52.429066   42483 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 22:59:52.429093   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 22:59:52.429106   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 22:59:52.429118   42483 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:59:52.429669   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 22:59:52.454421   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 22:59:52.479165   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 22:59:52.502636   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 22:59:52.525999   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 22:59:52.549503   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 22:59:52.575002   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 22:59:52.598502   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/ha-920193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 22:59:52.621566   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 22:59:52.645812   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 22:59:52.668854   42483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 22:59:52.691401   42483 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 22:59:52.707802   42483 ssh_runner.go:195] Run: openssl version
	I1209 22:59:52.713695   42483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 22:59:52.724315   42483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 22:59:52.728590   42483 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 22:59:52.728652   42483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 22:59:52.733965   42483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 22:59:52.743892   42483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 22:59:52.754704   42483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 22:59:52.758861   42483 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 22:59:52.758904   42483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 22:59:52.764380   42483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 22:59:52.773686   42483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 22:59:52.785209   42483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:59:52.789697   42483 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:59:52.789745   42483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 22:59:52.795553   42483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
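
The hash-named links created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's hashed-directory convention: the link name is the certificate's subject hash as printed by openssl x509 -hash -noout, with a .0 suffix, so OpenSSL-based clients can locate the CA by hash. A rough Go equivalent of that openssl-plus-ln pair, purely illustrative (linkBySubjectHash is not a minikube function):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and creates
// <certsDir>/<hash>.0 pointing at the certificate, mimicking the
// `openssl x509 -hash` plus `ln -fs` commands in the log above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	os.Remove(link) // ignore error; mirrors the force flag of `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
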
	I1209 22:59:52.804829   42483 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 22:59:52.809130   42483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 22:59:52.814609   42483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 22:59:52.820098   42483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 22:59:52.825515   42483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 22:59:52.831089   42483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 22:59:52.836385   42483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 22:59:52.841696   42483 kubeadm.go:392] StartCluster: {Name:ha-920193 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-920193 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.45 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:59:52.841808   42483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 22:59:52.841857   42483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 22:59:52.880781   42483 cri.go:89] found id: "18d5433d683b5f983c0b88682825cbff38c24767c98a7a2a107a54d81f3949aa"
	I1209 22:59:52.880800   42483 cri.go:89] found id: "acf0698e169715e43d83cf421d42f9e817aca897ce21fdda7308883df102fe59"
	I1209 22:59:52.880804   42483 cri.go:89] found id: "9c35bdf23f46681c1892764f5d7c07785fcfe5e74369cb517dd4feb6dd774790"
	I1209 22:59:52.880807   42483 cri.go:89] found id: "14b80feac0f9a191338990153a34713a6598298a75c38fbc982040013507119c"
	I1209 22:59:52.880809   42483 cri.go:89] found id: "6bdcee2ff30bbad8e60b186d63310171483601dc78ab1abec8c66438bfd2f67a"
	I1209 22:59:52.880812   42483 cri.go:89] found id: "a6a62ed3f6ca851b9e241ceaebd0beefc01fd7dbe3845409d02a901b02942a75"
	I1209 22:59:52.880814   42483 cri.go:89] found id: "d26f562ad5527a828f195e865deac5813fd73efddb9d23600b4466684698607a"
	I1209 22:59:52.880816   42483 cri.go:89] found id: "233aa49869db437ba273e21a1500627e09ea9fe9d935876b60f8a3cfd8cee678"
	I1209 22:59:52.880819   42483 cri.go:89] found id: "b845a7a9380501aac2e0992c1b802cf4bc3bd5aea4f43d2d2d4f10c946c1c66f"
	I1209 22:59:52.880835   42483 cri.go:89] found id: "2c5a043b38715a340bb7662dc4c1de71af43980e810d30d77eb7a4f72e8ff581"
	I1209 22:59:52.880838   42483 cri.go:89] found id: "f0a29f1dc44e49c57d8cf3df8d8313d05b06311834a1e4a23d18654d0f96f18a"
	I1209 22:59:52.880841   42483 cri.go:89] found id: "b8197a166eeaf4ba5da428f69c30495d3145c0721c439d3db67e6e3d2c92b6b9"
	I1209 22:59:52.880843   42483 cri.go:89] found id: "6ee0fecee78f07effd047f92af879f08c80eb37beead97bb70f87eda39597963"
	I1209 22:59:52.880846   42483 cri.go:89] found id: ""
	I1209 22:59:52.880888   42483 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-920193 -n ha-920193
helpers_test.go:261: (dbg) Run:  kubectl --context ha-920193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.81s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (319.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-555395
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-555395
E1209 23:20:26.332755   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-555395: exit status 82 (2m1.863705674s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-555395-m03"  ...
	* Stopping node "multinode-555395-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 stop -p multinode-555395" : exit status 82
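
For context on the failure above: the stop command exits non-zero (82 here, alongside the GUEST_STOP_TIMEOUT message), and the harness recovers that code from the wrapped *exec.ExitError. Below is a minimal sketch of running the same command and reading its exit code, assuming out/minikube-linux-amd64 and the multinode-555395 profile exist locally; it is an illustration only, not the test suite's own Run helper.

// Run "minikube stop" for a profile and report its exit code, roughly the
// way the failure above surfaces exit status 82. The binary path and
// profile name are taken from the log; adjust them for a local setup.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-555395")
	output, err := cmd.CombinedOutput()
	fmt.Print(string(output))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero exit (82 in the run above) means the stop did not
		// complete cleanly, e.g. GUEST_STOP_TIMEOUT while VMs keep running.
		fmt.Println("non-zero exit:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run minikube stop:", err)
	}
}
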
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-555395 --wait=true -v=8 --alsologtostderr
E1209 23:23:12.527585   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-555395 --wait=true -v=8 --alsologtostderr: (3m15.106535674s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-555395
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-555395 -n multinode-555395
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 logs -n 25
E1209 23:25:26.333205   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-555395 logs -n 25: (1.937468596s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-555395 cp multinode-555395-m02:/home/docker/cp-test.txt                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile813365822/001/cp-test_multinode-555395-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-555395 cp multinode-555395-m02:/home/docker/cp-test.txt                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395:/home/docker/cp-test_multinode-555395-m02_multinode-555395.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n multinode-555395 sudo cat                                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | /home/docker/cp-test_multinode-555395-m02_multinode-555395.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-555395 cp multinode-555395-m02:/home/docker/cp-test.txt                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m03:/home/docker/cp-test_multinode-555395-m02_multinode-555395-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n multinode-555395-m03 sudo cat                                   | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | /home/docker/cp-test_multinode-555395-m02_multinode-555395-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-555395 cp testdata/cp-test.txt                                                | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-555395 cp multinode-555395-m03:/home/docker/cp-test.txt                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile813365822/001/cp-test_multinode-555395-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-555395 cp multinode-555395-m03:/home/docker/cp-test.txt                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395:/home/docker/cp-test_multinode-555395-m03_multinode-555395.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n multinode-555395 sudo cat                                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | /home/docker/cp-test_multinode-555395-m03_multinode-555395.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-555395 cp multinode-555395-m03:/home/docker/cp-test.txt                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m02:/home/docker/cp-test_multinode-555395-m03_multinode-555395-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n multinode-555395-m02 sudo cat                                   | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | /home/docker/cp-test_multinode-555395-m03_multinode-555395-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-555395 node stop m03                                                          | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	| node    | multinode-555395 node start                                                             | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:20 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-555395                                                                | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC |                     |
	| stop    | -p multinode-555395                                                                     | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC |                     |
	| start   | -p multinode-555395                                                                     | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:22 UTC | 09 Dec 24 23:25 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-555395                                                                | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:25 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:22:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:22:10.739345   54859 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:22:10.739469   54859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:22:10.739478   54859 out.go:358] Setting ErrFile to fd 2...
	I1209 23:22:10.739482   54859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:22:10.739657   54859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:22:10.740172   54859 out.go:352] Setting JSON to false
	I1209 23:22:10.741060   54859 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7482,"bootTime":1733779049,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:22:10.741175   54859 start.go:139] virtualization: kvm guest
	I1209 23:22:10.743487   54859 out.go:177] * [multinode-555395] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:22:10.744832   54859 notify.go:220] Checking for updates...
	I1209 23:22:10.744875   54859 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:22:10.746521   54859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:22:10.747932   54859 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:22:10.749136   54859 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:22:10.750183   54859 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:22:10.751360   54859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:22:10.752894   54859 config.go:182] Loaded profile config "multinode-555395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:22:10.752970   54859 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:22:10.753396   54859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:22:10.753451   54859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:22:10.768491   54859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35673
	I1209 23:22:10.768940   54859 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:22:10.769517   54859 main.go:141] libmachine: Using API Version  1
	I1209 23:22:10.769536   54859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:22:10.769918   54859 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:22:10.770110   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:22:10.807017   54859 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 23:22:10.808160   54859 start.go:297] selected driver: kvm2
	I1209 23:22:10.808179   54859 start.go:901] validating driver "kvm2" against &{Name:multinode-555395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-555395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.99 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fa
lse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:22:10.808382   54859 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:22:10.808766   54859 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:22:10.808856   54859 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:22:10.825598   54859 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:22:10.826304   54859 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:22:10.826332   54859 cni.go:84] Creating CNI manager for ""
	I1209 23:22:10.826358   54859 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1209 23:22:10.826422   54859 start.go:340] cluster config:
	{Name:multinode-555395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-555395 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.99 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner
:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:22:10.826546   54859 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:22:10.829160   54859 out.go:177] * Starting "multinode-555395" primary control-plane node in "multinode-555395" cluster
	I1209 23:22:10.830539   54859 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:22:10.830583   54859 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 23:22:10.830591   54859 cache.go:56] Caching tarball of preloaded images
	I1209 23:22:10.830687   54859 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:22:10.830699   54859 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 23:22:10.830801   54859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/config.json ...
	I1209 23:22:10.831041   54859 start.go:360] acquireMachinesLock for multinode-555395: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:22:10.831090   54859 start.go:364] duration metric: took 28.347µs to acquireMachinesLock for "multinode-555395"
	I1209 23:22:10.831107   54859 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:22:10.831112   54859 fix.go:54] fixHost starting: 
	I1209 23:22:10.831370   54859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:22:10.831402   54859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:22:10.846275   54859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I1209 23:22:10.846644   54859 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:22:10.847074   54859 main.go:141] libmachine: Using API Version  1
	I1209 23:22:10.847096   54859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:22:10.847422   54859 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:22:10.847593   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:22:10.847745   54859 main.go:141] libmachine: (multinode-555395) Calling .GetState
	I1209 23:22:10.849390   54859 fix.go:112] recreateIfNeeded on multinode-555395: state=Running err=<nil>
	W1209 23:22:10.849421   54859 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:22:10.851947   54859 out.go:177] * Updating the running kvm2 "multinode-555395" VM ...
	I1209 23:22:10.853211   54859 machine.go:93] provisionDockerMachine start ...
	I1209 23:22:10.853231   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:22:10.853420   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:22:10.855784   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:10.856173   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:10.856196   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:10.856353   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:22:10.856492   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:10.856645   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:10.856786   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:22:10.856950   54859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:22:10.857160   54859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1209 23:22:10.857173   54859 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:22:10.967579   54859 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-555395
	
	I1209 23:22:10.967606   54859 main.go:141] libmachine: (multinode-555395) Calling .GetMachineName
	I1209 23:22:10.967874   54859 buildroot.go:166] provisioning hostname "multinode-555395"
	I1209 23:22:10.967923   54859 main.go:141] libmachine: (multinode-555395) Calling .GetMachineName
	I1209 23:22:10.968098   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:22:10.970425   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:10.970853   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:10.970879   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:10.971069   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:22:10.971268   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:10.971441   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:10.971613   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:22:10.971835   54859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:22:10.972023   54859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1209 23:22:10.972041   54859 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-555395 && echo "multinode-555395" | sudo tee /etc/hostname
	I1209 23:22:11.096531   54859 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-555395
	
	I1209 23:22:11.096565   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:22:11.099454   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.099783   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:11.099814   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.099994   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:22:11.100160   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:11.100324   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:11.100468   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:22:11.100663   54859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:22:11.100867   54859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1209 23:22:11.100885   54859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-555395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-555395/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-555395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:22:11.216936   54859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:22:11.216969   54859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:22:11.217022   54859 buildroot.go:174] setting up certificates
	I1209 23:22:11.217032   54859 provision.go:84] configureAuth start
	I1209 23:22:11.217044   54859 main.go:141] libmachine: (multinode-555395) Calling .GetMachineName
	I1209 23:22:11.217346   54859 main.go:141] libmachine: (multinode-555395) Calling .GetIP
	I1209 23:22:11.220474   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.220890   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:11.220921   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.221096   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:22:11.223284   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.223624   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:11.223663   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.223825   54859 provision.go:143] copyHostCerts
	I1209 23:22:11.223888   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:22:11.223924   54859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:22:11.223937   54859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:22:11.224010   54859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:22:11.224088   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:22:11.224109   54859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:22:11.224118   54859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:22:11.224147   54859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:22:11.224202   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:22:11.224221   54859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:22:11.224228   54859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:22:11.224253   54859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:22:11.224329   54859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.multinode-555395 san=[127.0.0.1 192.168.39.48 localhost minikube multinode-555395]
	I1209 23:22:11.414438   54859 provision.go:177] copyRemoteCerts
	I1209 23:22:11.414512   54859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:22:11.414548   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:22:11.417124   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.417462   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:11.417497   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.417689   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:22:11.417848   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:11.418006   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:22:11.418149   54859 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/multinode-555395/id_rsa Username:docker}
	I1209 23:22:11.502013   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 23:22:11.502083   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:22:11.526110   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 23:22:11.526185   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1209 23:22:11.551371   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 23:22:11.551450   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:22:11.576807   54859 provision.go:87] duration metric: took 359.762529ms to configureAuth
	I1209 23:22:11.576838   54859 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:22:11.577063   54859 config.go:182] Loaded profile config "multinode-555395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:22:11.577132   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:22:11.579900   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.580273   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:11.580306   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.580477   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:22:11.580677   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:11.580825   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:11.580983   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:22:11.581131   54859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:22:11.581358   54859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1209 23:22:11.581381   54859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:23:42.286375   54859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:23:42.286403   54859 machine.go:96] duration metric: took 1m31.433176078s to provisionDockerMachine
	I1209 23:23:42.286422   54859 start.go:293] postStartSetup for "multinode-555395" (driver="kvm2")
	I1209 23:23:42.286437   54859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:23:42.286467   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:23:42.286876   54859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:23:42.286913   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:23:42.290177   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.290710   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:23:42.290734   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.290929   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:23:42.291089   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:23:42.291245   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:23:42.291394   54859 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/multinode-555395/id_rsa Username:docker}
	I1209 23:23:42.378120   54859 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:23:42.382199   54859 command_runner.go:130] > NAME=Buildroot
	I1209 23:23:42.382218   54859 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1209 23:23:42.382222   54859 command_runner.go:130] > ID=buildroot
	I1209 23:23:42.382227   54859 command_runner.go:130] > VERSION_ID=2023.02.9
	I1209 23:23:42.382232   54859 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1209 23:23:42.382425   54859 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:23:42.382449   54859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:23:42.382536   54859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:23:42.382645   54859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:23:42.382657   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 23:23:42.382758   54859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:23:42.392008   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:23:42.416175   54859 start.go:296] duration metric: took 129.7373ms for postStartSetup
	I1209 23:23:42.416223   54859 fix.go:56] duration metric: took 1m31.585109382s for fixHost
	I1209 23:23:42.416262   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:23:42.418856   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.419201   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:23:42.419232   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.419407   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:23:42.419598   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:23:42.419768   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:23:42.419887   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:23:42.420017   54859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:23:42.420191   54859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1209 23:23:42.420204   54859 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:23:42.528002   54859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733786622.507933675
	
	I1209 23:23:42.528027   54859 fix.go:216] guest clock: 1733786622.507933675
	I1209 23:23:42.528035   54859 fix.go:229] Guest: 2024-12-09 23:23:42.507933675 +0000 UTC Remote: 2024-12-09 23:23:42.416228751 +0000 UTC m=+91.713477283 (delta=91.704924ms)
	I1209 23:23:42.528051   54859 fix.go:200] guest clock delta is within tolerance: 91.704924ms
	I1209 23:23:42.528056   54859 start.go:83] releasing machines lock for "multinode-555395", held for 1m31.696956201s
	I1209 23:23:42.528077   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:23:42.528327   54859 main.go:141] libmachine: (multinode-555395) Calling .GetIP
	I1209 23:23:42.530966   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.531345   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:23:42.531385   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.531551   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:23:42.532059   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:23:42.532245   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:23:42.532325   54859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:23:42.532387   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:23:42.532434   54859 ssh_runner.go:195] Run: cat /version.json
	I1209 23:23:42.532460   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:23:42.534695   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.534993   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.535026   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:23:42.535056   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.535261   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:23:42.535296   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:23:42.535322   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.535421   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:23:42.535486   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:23:42.535589   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:23:42.535655   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:23:42.535713   54859 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/multinode-555395/id_rsa Username:docker}
	I1209 23:23:42.535764   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:23:42.535899   54859 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/multinode-555395/id_rsa Username:docker}
	I1209 23:23:42.625360   54859 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1209 23:23:42.643189   54859 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 23:23:42.644144   54859 ssh_runner.go:195] Run: systemctl --version
	I1209 23:23:42.650685   54859 command_runner.go:130] > systemd 252 (252)
	I1209 23:23:42.650717   54859 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1209 23:23:42.650773   54859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:23:42.804627   54859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 23:23:42.810076   54859 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 23:23:42.810432   54859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:23:42.810488   54859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:23:42.819486   54859 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 23:23:42.819509   54859 start.go:495] detecting cgroup driver to use...
	I1209 23:23:42.819574   54859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:23:42.835638   54859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:23:42.848550   54859 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:23:42.848610   54859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:23:42.861490   54859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:23:42.874313   54859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:23:43.023874   54859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:23:43.155667   54859 docker.go:233] disabling docker service ...
	I1209 23:23:43.155730   54859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:23:43.172225   54859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:23:43.185784   54859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:23:43.325351   54859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:23:43.468866   54859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:23:43.482453   54859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:23:43.500588   54859 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1209 23:23:43.500631   54859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:23:43.500676   54859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:23:43.510359   54859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:23:43.510423   54859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:23:43.520075   54859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:23:43.530173   54859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:23:43.539664   54859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:23:43.549511   54859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:23:43.559519   54859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:23:43.569908   54859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:23:43.579500   54859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:23:43.588180   54859 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 23:23:43.588246   54859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:23:43.596900   54859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:23:43.733673   54859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:23:43.924528   54859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:23:43.924592   54859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:23:43.929277   54859 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1209 23:23:43.929310   54859 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 23:23:43.929317   54859 command_runner.go:130] > Device: 0,22	Inode: 1298        Links: 1
	I1209 23:23:43.929324   54859 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 23:23:43.929328   54859 command_runner.go:130] > Access: 2024-12-09 23:23:43.799670877 +0000
	I1209 23:23:43.929334   54859 command_runner.go:130] > Modify: 2024-12-09 23:23:43.799670877 +0000
	I1209 23:23:43.929339   54859 command_runner.go:130] > Change: 2024-12-09 23:23:43.799670877 +0000
	I1209 23:23:43.929342   54859 command_runner.go:130] >  Birth: -
	I1209 23:23:43.929452   54859 start.go:563] Will wait 60s for crictl version
	I1209 23:23:43.929522   54859 ssh_runner.go:195] Run: which crictl
	I1209 23:23:43.933004   54859 command_runner.go:130] > /usr/bin/crictl
	I1209 23:23:43.933078   54859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:23:43.968243   54859 command_runner.go:130] > Version:  0.1.0
	I1209 23:23:43.968273   54859 command_runner.go:130] > RuntimeName:  cri-o
	I1209 23:23:43.968281   54859 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1209 23:23:43.968289   54859 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 23:23:43.968317   54859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:23:43.968377   54859 ssh_runner.go:195] Run: crio --version
	I1209 23:23:43.994880   54859 command_runner.go:130] > crio version 1.29.1
	I1209 23:23:43.994907   54859 command_runner.go:130] > Version:        1.29.1
	I1209 23:23:43.994914   54859 command_runner.go:130] > GitCommit:      unknown
	I1209 23:23:43.994918   54859 command_runner.go:130] > GitCommitDate:  unknown
	I1209 23:23:43.994922   54859 command_runner.go:130] > GitTreeState:   clean
	I1209 23:23:43.994927   54859 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1209 23:23:43.994931   54859 command_runner.go:130] > GoVersion:      go1.21.6
	I1209 23:23:43.994935   54859 command_runner.go:130] > Compiler:       gc
	I1209 23:23:43.994940   54859 command_runner.go:130] > Platform:       linux/amd64
	I1209 23:23:43.994944   54859 command_runner.go:130] > Linkmode:       dynamic
	I1209 23:23:43.994948   54859 command_runner.go:130] > BuildTags:      
	I1209 23:23:43.994953   54859 command_runner.go:130] >   containers_image_ostree_stub
	I1209 23:23:43.994957   54859 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1209 23:23:43.994964   54859 command_runner.go:130] >   btrfs_noversion
	I1209 23:23:43.994968   54859 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1209 23:23:43.994975   54859 command_runner.go:130] >   libdm_no_deferred_remove
	I1209 23:23:43.994983   54859 command_runner.go:130] >   seccomp
	I1209 23:23:43.994990   54859 command_runner.go:130] > LDFlags:          unknown
	I1209 23:23:43.994995   54859 command_runner.go:130] > SeccompEnabled:   true
	I1209 23:23:43.995001   54859 command_runner.go:130] > AppArmorEnabled:  false
	I1209 23:23:43.996054   54859 ssh_runner.go:195] Run: crio --version
	I1209 23:23:44.022250   54859 command_runner.go:130] > crio version 1.29.1
	I1209 23:23:44.022270   54859 command_runner.go:130] > Version:        1.29.1
	I1209 23:23:44.022275   54859 command_runner.go:130] > GitCommit:      unknown
	I1209 23:23:44.022279   54859 command_runner.go:130] > GitCommitDate:  unknown
	I1209 23:23:44.022289   54859 command_runner.go:130] > GitTreeState:   clean
	I1209 23:23:44.022295   54859 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1209 23:23:44.022300   54859 command_runner.go:130] > GoVersion:      go1.21.6
	I1209 23:23:44.022304   54859 command_runner.go:130] > Compiler:       gc
	I1209 23:23:44.022308   54859 command_runner.go:130] > Platform:       linux/amd64
	I1209 23:23:44.022312   54859 command_runner.go:130] > Linkmode:       dynamic
	I1209 23:23:44.022317   54859 command_runner.go:130] > BuildTags:      
	I1209 23:23:44.022321   54859 command_runner.go:130] >   containers_image_ostree_stub
	I1209 23:23:44.022324   54859 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1209 23:23:44.022328   54859 command_runner.go:130] >   btrfs_noversion
	I1209 23:23:44.022332   54859 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1209 23:23:44.022337   54859 command_runner.go:130] >   libdm_no_deferred_remove
	I1209 23:23:44.022340   54859 command_runner.go:130] >   seccomp
	I1209 23:23:44.022347   54859 command_runner.go:130] > LDFlags:          unknown
	I1209 23:23:44.022351   54859 command_runner.go:130] > SeccompEnabled:   true
	I1209 23:23:44.022355   54859 command_runner.go:130] > AppArmorEnabled:  false
	I1209 23:23:44.026258   54859 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:23:44.027595   54859 main.go:141] libmachine: (multinode-555395) Calling .GetIP
	I1209 23:23:44.030187   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:44.030632   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:23:44.030662   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:44.030825   54859 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 23:23:44.034842   54859 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1209 23:23:44.034938   54859 kubeadm.go:883] updating cluster {Name:multinode-555395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-555395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.99 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:23:44.035082   54859 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:23:44.035140   54859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:23:44.073020   54859 command_runner.go:130] > {
	I1209 23:23:44.073041   54859 command_runner.go:130] >   "images": [
	I1209 23:23:44.073048   54859 command_runner.go:130] >     {
	I1209 23:23:44.073058   54859 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1209 23:23:44.073062   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073068   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1209 23:23:44.073071   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073075   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073083   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1209 23:23:44.073090   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1209 23:23:44.073094   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073098   54859 command_runner.go:130] >       "size": "94965812",
	I1209 23:23:44.073103   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.073107   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073117   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073124   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073128   54859 command_runner.go:130] >     },
	I1209 23:23:44.073134   54859 command_runner.go:130] >     {
	I1209 23:23:44.073140   54859 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1209 23:23:44.073147   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073152   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1209 23:23:44.073158   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073162   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073171   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1209 23:23:44.073178   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1209 23:23:44.073184   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073188   54859 command_runner.go:130] >       "size": "94963761",
	I1209 23:23:44.073194   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.073201   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073207   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073211   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073217   54859 command_runner.go:130] >     },
	I1209 23:23:44.073220   54859 command_runner.go:130] >     {
	I1209 23:23:44.073226   54859 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1209 23:23:44.073230   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073235   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1209 23:23:44.073242   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073246   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073255   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1209 23:23:44.073265   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1209 23:23:44.073268   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073274   54859 command_runner.go:130] >       "size": "1363676",
	I1209 23:23:44.073278   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.073284   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073289   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073295   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073299   54859 command_runner.go:130] >     },
	I1209 23:23:44.073303   54859 command_runner.go:130] >     {
	I1209 23:23:44.073309   54859 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1209 23:23:44.073315   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073321   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 23:23:44.073334   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073338   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073348   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1209 23:23:44.073361   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1209 23:23:44.073367   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073372   54859 command_runner.go:130] >       "size": "31470524",
	I1209 23:23:44.073378   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.073382   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073388   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073392   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073398   54859 command_runner.go:130] >     },
	I1209 23:23:44.073402   54859 command_runner.go:130] >     {
	I1209 23:23:44.073411   54859 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1209 23:23:44.073417   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073422   54859 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1209 23:23:44.073428   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073433   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073445   54859 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1209 23:23:44.073455   54859 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1209 23:23:44.073461   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073466   54859 command_runner.go:130] >       "size": "63273227",
	I1209 23:23:44.073472   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.073477   54859 command_runner.go:130] >       "username": "nonroot",
	I1209 23:23:44.073483   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073487   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073493   54859 command_runner.go:130] >     },
	I1209 23:23:44.073496   54859 command_runner.go:130] >     {
	I1209 23:23:44.073505   54859 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1209 23:23:44.073511   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073516   54859 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1209 23:23:44.073522   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073526   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073534   54859 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1209 23:23:44.073543   54859 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1209 23:23:44.073549   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073553   54859 command_runner.go:130] >       "size": "149009664",
	I1209 23:23:44.073560   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.073564   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.073569   54859 command_runner.go:130] >       },
	I1209 23:23:44.073573   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073576   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073598   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073607   54859 command_runner.go:130] >     },
	I1209 23:23:44.073610   54859 command_runner.go:130] >     {
	I1209 23:23:44.073616   54859 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1209 23:23:44.073623   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073629   54859 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1209 23:23:44.073635   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073639   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073648   54859 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1209 23:23:44.073658   54859 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1209 23:23:44.073665   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073669   54859 command_runner.go:130] >       "size": "95274464",
	I1209 23:23:44.073675   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.073679   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.073684   54859 command_runner.go:130] >       },
	I1209 23:23:44.073690   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073693   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073701   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073704   54859 command_runner.go:130] >     },
	I1209 23:23:44.073710   54859 command_runner.go:130] >     {
	I1209 23:23:44.073716   54859 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1209 23:23:44.073723   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073728   54859 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1209 23:23:44.073733   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073737   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073753   54859 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1209 23:23:44.073763   54859 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1209 23:23:44.073769   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073774   54859 command_runner.go:130] >       "size": "89474374",
	I1209 23:23:44.073780   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.073784   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.073790   54859 command_runner.go:130] >       },
	I1209 23:23:44.073794   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073801   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073805   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073808   54859 command_runner.go:130] >     },
	I1209 23:23:44.073811   54859 command_runner.go:130] >     {
	I1209 23:23:44.073817   54859 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1209 23:23:44.073821   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073825   54859 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1209 23:23:44.073828   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073832   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073839   54859 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1209 23:23:44.073846   54859 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1209 23:23:44.073849   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073853   54859 command_runner.go:130] >       "size": "92783513",
	I1209 23:23:44.073857   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.073864   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073868   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073874   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073877   54859 command_runner.go:130] >     },
	I1209 23:23:44.073881   54859 command_runner.go:130] >     {
	I1209 23:23:44.073887   54859 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1209 23:23:44.073894   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073898   54859 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1209 23:23:44.073904   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073908   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073917   54859 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1209 23:23:44.073926   54859 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1209 23:23:44.073932   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073936   54859 command_runner.go:130] >       "size": "68457798",
	I1209 23:23:44.073941   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.073945   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.073949   54859 command_runner.go:130] >       },
	I1209 23:23:44.073954   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073958   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073965   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073968   54859 command_runner.go:130] >     },
	I1209 23:23:44.073974   54859 command_runner.go:130] >     {
	I1209 23:23:44.073979   54859 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1209 23:23:44.073985   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073990   54859 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1209 23:23:44.073995   54859 command_runner.go:130] >       ],
	I1209 23:23:44.074000   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.074009   54859 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1209 23:23:44.074018   54859 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1209 23:23:44.074025   54859 command_runner.go:130] >       ],
	I1209 23:23:44.074029   54859 command_runner.go:130] >       "size": "742080",
	I1209 23:23:44.074035   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.074039   54859 command_runner.go:130] >         "value": "65535"
	I1209 23:23:44.074045   54859 command_runner.go:130] >       },
	I1209 23:23:44.074049   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.074055   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.074059   54859 command_runner.go:130] >       "pinned": true
	I1209 23:23:44.074064   54859 command_runner.go:130] >     }
	I1209 23:23:44.074067   54859 command_runner.go:130] >   ]
	I1209 23:23:44.074071   54859 command_runner.go:130] > }
	I1209 23:23:44.074747   54859 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:23:44.074765   54859 crio.go:433] Images already preloaded, skipping extraction
	I1209 23:23:44.074834   54859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:23:44.104419   54859 command_runner.go:130] > {
	I1209 23:23:44.104438   54859 command_runner.go:130] >   "images": [
	I1209 23:23:44.104444   54859 command_runner.go:130] >     {
	I1209 23:23:44.104452   54859 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1209 23:23:44.104456   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104461   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1209 23:23:44.104465   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104469   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104477   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1209 23:23:44.104483   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1209 23:23:44.104487   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104492   54859 command_runner.go:130] >       "size": "94965812",
	I1209 23:23:44.104496   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.104500   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.104504   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.104511   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.104515   54859 command_runner.go:130] >     },
	I1209 23:23:44.104519   54859 command_runner.go:130] >     {
	I1209 23:23:44.104524   54859 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1209 23:23:44.104528   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104534   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1209 23:23:44.104537   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104541   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104548   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1209 23:23:44.104557   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1209 23:23:44.104561   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104565   54859 command_runner.go:130] >       "size": "94963761",
	I1209 23:23:44.104568   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.104574   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.104578   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.104582   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.104584   54859 command_runner.go:130] >     },
	I1209 23:23:44.104588   54859 command_runner.go:130] >     {
	I1209 23:23:44.104593   54859 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1209 23:23:44.104597   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104603   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1209 23:23:44.104606   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104613   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104620   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1209 23:23:44.104627   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1209 23:23:44.104633   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104636   54859 command_runner.go:130] >       "size": "1363676",
	I1209 23:23:44.104640   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.104645   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.104658   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.104665   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.104668   54859 command_runner.go:130] >     },
	I1209 23:23:44.104671   54859 command_runner.go:130] >     {
	I1209 23:23:44.104678   54859 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1209 23:23:44.104681   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104688   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 23:23:44.104692   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104696   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104705   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1209 23:23:44.104718   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1209 23:23:44.104724   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104728   54859 command_runner.go:130] >       "size": "31470524",
	I1209 23:23:44.104732   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.104738   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.104743   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.104749   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.104752   54859 command_runner.go:130] >     },
	I1209 23:23:44.104758   54859 command_runner.go:130] >     {
	I1209 23:23:44.104763   54859 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1209 23:23:44.104770   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104775   54859 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1209 23:23:44.104781   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104785   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104795   54859 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1209 23:23:44.104804   54859 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1209 23:23:44.104810   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104814   54859 command_runner.go:130] >       "size": "63273227",
	I1209 23:23:44.104820   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.104824   54859 command_runner.go:130] >       "username": "nonroot",
	I1209 23:23:44.104828   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.104834   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.104837   54859 command_runner.go:130] >     },
	I1209 23:23:44.104842   54859 command_runner.go:130] >     {
	I1209 23:23:44.104848   54859 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1209 23:23:44.104854   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104859   54859 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1209 23:23:44.104865   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104868   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104877   54859 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1209 23:23:44.104886   54859 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1209 23:23:44.104892   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104897   54859 command_runner.go:130] >       "size": "149009664",
	I1209 23:23:44.104903   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.104907   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.104916   54859 command_runner.go:130] >       },
	I1209 23:23:44.104922   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.104926   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.104930   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.104934   54859 command_runner.go:130] >     },
	I1209 23:23:44.104937   54859 command_runner.go:130] >     {
	I1209 23:23:44.104945   54859 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1209 23:23:44.104952   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104956   54859 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1209 23:23:44.104962   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104967   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104976   54859 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1209 23:23:44.104986   54859 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1209 23:23:44.104991   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104996   54859 command_runner.go:130] >       "size": "95274464",
	I1209 23:23:44.105002   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.105006   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.105012   54859 command_runner.go:130] >       },
	I1209 23:23:44.105015   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.105022   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.105026   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.105029   54859 command_runner.go:130] >     },
	I1209 23:23:44.105032   54859 command_runner.go:130] >     {
	I1209 23:23:44.105038   54859 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1209 23:23:44.105044   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.105049   54859 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1209 23:23:44.105055   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105059   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.105075   54859 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1209 23:23:44.105086   54859 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1209 23:23:44.105092   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105097   54859 command_runner.go:130] >       "size": "89474374",
	I1209 23:23:44.105104   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.105107   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.105113   54859 command_runner.go:130] >       },
	I1209 23:23:44.105118   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.105124   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.105127   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.105133   54859 command_runner.go:130] >     },
	I1209 23:23:44.105137   54859 command_runner.go:130] >     {
	I1209 23:23:44.105143   54859 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1209 23:23:44.105149   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.105153   54859 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1209 23:23:44.105159   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105163   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.105173   54859 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1209 23:23:44.105185   54859 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1209 23:23:44.105191   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105195   54859 command_runner.go:130] >       "size": "92783513",
	I1209 23:23:44.105201   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.105204   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.105209   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.105213   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.105218   54859 command_runner.go:130] >     },
	I1209 23:23:44.105221   54859 command_runner.go:130] >     {
	I1209 23:23:44.105229   54859 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1209 23:23:44.105235   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.105239   54859 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1209 23:23:44.105245   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105249   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.105258   54859 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1209 23:23:44.105267   54859 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1209 23:23:44.105273   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105277   54859 command_runner.go:130] >       "size": "68457798",
	I1209 23:23:44.105283   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.105287   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.105293   54859 command_runner.go:130] >       },
	I1209 23:23:44.105297   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.105303   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.105307   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.105312   54859 command_runner.go:130] >     },
	I1209 23:23:44.105316   54859 command_runner.go:130] >     {
	I1209 23:23:44.105328   54859 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1209 23:23:44.105334   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.105339   54859 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1209 23:23:44.105344   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105349   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.105358   54859 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1209 23:23:44.105367   54859 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1209 23:23:44.105373   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105377   54859 command_runner.go:130] >       "size": "742080",
	I1209 23:23:44.105383   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.105387   54859 command_runner.go:130] >         "value": "65535"
	I1209 23:23:44.105391   54859 command_runner.go:130] >       },
	I1209 23:23:44.105395   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.105400   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.105404   54859 command_runner.go:130] >       "pinned": true
	I1209 23:23:44.105410   54859 command_runner.go:130] >     }
	I1209 23:23:44.105415   54859 command_runner.go:130] >   ]
	I1209 23:23:44.105420   54859 command_runner.go:130] > }
	I1209 23:23:44.105966   54859 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:23:44.105981   54859 cache_images.go:84] Images are preloaded, skipping loading
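
The two `sudo crictl images --output json` dumps above are what the preload check consumes: the JSON is decoded and the reported repo tags are compared against the images required for Kubernetes v1.31.2 before deciding to skip extraction. A minimal Go sketch of that kind of check, assuming a hypothetical requiredImages list rather than minikube's actual per-version manifest:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage mirrors only the fields of interest from `crictl images --output json`.
type crictlImage struct {
	RepoTags []string `json:"repoTags"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// Illustrative subset; the real required set depends on the Kubernetes version (v1.31.2 here).
	requiredImages := []string{
		"registry.k8s.io/kube-apiserver:v1.31.2",
		"registry.k8s.io/kube-controller-manager:v1.31.2",
		"registry.k8s.io/kube-scheduler:v1.31.2",
		"registry.k8s.io/kube-proxy:v1.31.2",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/pause:3.10",
	}

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}

	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}

	// Index every repo tag reported by the runtime.
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}

	// An empty result corresponds to "all images are preloaded for cri-o runtime" above.
	for _, want := range requiredImages {
		if !have[want] {
			fmt.Println("missing:", want)
		}
	}
}
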
	I1209 23:23:44.105988   54859 kubeadm.go:934] updating node { 192.168.39.48 8443 v1.31.2 crio true true} ...
	I1209 23:23:44.106075   54859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-555395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-555395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
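
The kubelet block above is the systemd drop-in rendered for this control-plane node; the flags come straight from the node entry in the config (hostname multinode-555395, node IP 192.168.39.48). A hedged Go sketch of assembling that ExecStart line from those per-node values (the helper name is illustrative, not minikube's own code, and the real template carries more options):

package main

import (
	"fmt"
	"strings"
)

// buildKubeletExecStart assembles the ExecStart line shown in the drop-in above
// from the Kubernetes version, node hostname, and node IP.
func buildKubeletExecStart(k8sVersion, hostname, nodeIP string) string {
	args := []string{
		"/var/lib/minikube/binaries/" + k8sVersion + "/kubelet",
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + hostname,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return "ExecStart=" + strings.Join(args, " ")
}

func main() {
	fmt.Println(buildKubeletExecStart("v1.31.2", "multinode-555395", "192.168.39.48"))
}
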
	I1209 23:23:44.106143   54859 ssh_runner.go:195] Run: crio config
	I1209 23:23:44.146374   54859 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1209 23:23:44.146398   54859 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1209 23:23:44.146404   54859 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1209 23:23:44.146407   54859 command_runner.go:130] > #
	I1209 23:23:44.146414   54859 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1209 23:23:44.146422   54859 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1209 23:23:44.146429   54859 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1209 23:23:44.146437   54859 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1209 23:23:44.146441   54859 command_runner.go:130] > # reload'.
	I1209 23:23:44.146447   54859 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1209 23:23:44.146453   54859 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1209 23:23:44.146459   54859 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1209 23:23:44.146465   54859 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1209 23:23:44.146469   54859 command_runner.go:130] > [crio]
	I1209 23:23:44.146475   54859 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1209 23:23:44.146482   54859 command_runner.go:130] > # containers images, in this directory.
	I1209 23:23:44.146626   54859 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1209 23:23:44.146656   54859 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1209 23:23:44.146665   54859 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1209 23:23:44.146678   54859 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1209 23:23:44.146687   54859 command_runner.go:130] > # imagestore = ""
	I1209 23:23:44.146699   54859 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1209 23:23:44.146714   54859 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1209 23:23:44.146724   54859 command_runner.go:130] > storage_driver = "overlay"
	I1209 23:23:44.146738   54859 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1209 23:23:44.146749   54859 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1209 23:23:44.146761   54859 command_runner.go:130] > storage_option = [
	I1209 23:23:44.146769   54859 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1209 23:23:44.146776   54859 command_runner.go:130] > ]
	I1209 23:23:44.146788   54859 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1209 23:23:44.146820   54859 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1209 23:23:44.146833   54859 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1209 23:23:44.146843   54859 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1209 23:23:44.146858   54859 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1209 23:23:44.146867   54859 command_runner.go:130] > # always happen on a node reboot
	I1209 23:23:44.146881   54859 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1209 23:23:44.146901   54859 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1209 23:23:44.146915   54859 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1209 23:23:44.146928   54859 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1209 23:23:44.146938   54859 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1209 23:23:44.146956   54859 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1209 23:23:44.146968   54859 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1209 23:23:44.146978   54859 command_runner.go:130] > # internal_wipe = true
	I1209 23:23:44.146990   54859 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1209 23:23:44.147004   54859 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1209 23:23:44.147015   54859 command_runner.go:130] > # internal_repair = false
	I1209 23:23:44.147026   54859 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1209 23:23:44.147040   54859 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1209 23:23:44.147051   54859 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1209 23:23:44.147058   54859 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1209 23:23:44.147065   54859 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1209 23:23:44.147070   54859 command_runner.go:130] > [crio.api]
	I1209 23:23:44.147076   54859 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1209 23:23:44.147080   54859 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1209 23:23:44.147087   54859 command_runner.go:130] > # IP address on which the stream server will listen.
	I1209 23:23:44.147092   54859 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1209 23:23:44.147099   54859 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1209 23:23:44.147106   54859 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1209 23:23:44.147111   54859 command_runner.go:130] > # stream_port = "0"
	I1209 23:23:44.147118   54859 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1209 23:23:44.147123   54859 command_runner.go:130] > # stream_enable_tls = false
	I1209 23:23:44.147131   54859 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1209 23:23:44.147136   54859 command_runner.go:130] > # stream_idle_timeout = ""
	I1209 23:23:44.147144   54859 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1209 23:23:44.147158   54859 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1209 23:23:44.147169   54859 command_runner.go:130] > # minutes.
	I1209 23:23:44.147177   54859 command_runner.go:130] > # stream_tls_cert = ""
	I1209 23:23:44.147194   54859 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1209 23:23:44.147209   54859 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1209 23:23:44.147220   54859 command_runner.go:130] > # stream_tls_key = ""
	I1209 23:23:44.147235   54859 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1209 23:23:44.147250   54859 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1209 23:23:44.147267   54859 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1209 23:23:44.147279   54859 command_runner.go:130] > # stream_tls_ca = ""
	I1209 23:23:44.147291   54859 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 23:23:44.147298   54859 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1209 23:23:44.147306   54859 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 23:23:44.147312   54859 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1209 23:23:44.147319   54859 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1209 23:23:44.147332   54859 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1209 23:23:44.147343   54859 command_runner.go:130] > [crio.runtime]
	I1209 23:23:44.147354   54859 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1209 23:23:44.147367   54859 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1209 23:23:44.147375   54859 command_runner.go:130] > # "nofile=1024:2048"
	I1209 23:23:44.147390   54859 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1209 23:23:44.147402   54859 command_runner.go:130] > # default_ulimits = [
	I1209 23:23:44.147410   54859 command_runner.go:130] > # ]
	I1209 23:23:44.147418   54859 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1209 23:23:44.147424   54859 command_runner.go:130] > # no_pivot = false
	I1209 23:23:44.147434   54859 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1209 23:23:44.147448   54859 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1209 23:23:44.147462   54859 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1209 23:23:44.147477   54859 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1209 23:23:44.147486   54859 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1209 23:23:44.147498   54859 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 23:23:44.147517   54859 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1209 23:23:44.147530   54859 command_runner.go:130] > # Cgroup setting for conmon
	I1209 23:23:44.147542   54859 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1209 23:23:44.147553   54859 command_runner.go:130] > conmon_cgroup = "pod"
	I1209 23:23:44.147581   54859 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1209 23:23:44.147595   54859 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1209 23:23:44.147610   54859 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 23:23:44.147619   54859 command_runner.go:130] > conmon_env = [
	I1209 23:23:44.147634   54859 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1209 23:23:44.147651   54859 command_runner.go:130] > ]
	I1209 23:23:44.147665   54859 command_runner.go:130] > # Additional environment variables to set for all the
	I1209 23:23:44.147677   54859 command_runner.go:130] > # containers. These are overridden if set in the
	I1209 23:23:44.147690   54859 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1209 23:23:44.147699   54859 command_runner.go:130] > # default_env = [
	I1209 23:23:44.147709   54859 command_runner.go:130] > # ]
	I1209 23:23:44.147721   54859 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1209 23:23:44.147737   54859 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1209 23:23:44.147747   54859 command_runner.go:130] > # selinux = false
	I1209 23:23:44.147759   54859 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1209 23:23:44.147772   54859 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1209 23:23:44.147783   54859 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1209 23:23:44.147794   54859 command_runner.go:130] > # seccomp_profile = ""
	I1209 23:23:44.147806   54859 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1209 23:23:44.147819   54859 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1209 23:23:44.147834   54859 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1209 23:23:44.147846   54859 command_runner.go:130] > # which might increase security.
	I1209 23:23:44.147858   54859 command_runner.go:130] > # This option is currently deprecated,
	I1209 23:23:44.147869   54859 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1209 23:23:44.147881   54859 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1209 23:23:44.147897   54859 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1209 23:23:44.147910   54859 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1209 23:23:44.147921   54859 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1209 23:23:44.147936   54859 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1209 23:23:44.147949   54859 command_runner.go:130] > # This option supports live configuration reload.
	I1209 23:23:44.147960   54859 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1209 23:23:44.147971   54859 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1209 23:23:44.147982   54859 command_runner.go:130] > # the cgroup blockio controller.
	I1209 23:23:44.147991   54859 command_runner.go:130] > # blockio_config_file = ""
	I1209 23:23:44.148006   54859 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1209 23:23:44.148016   54859 command_runner.go:130] > # blockio parameters.
	I1209 23:23:44.148025   54859 command_runner.go:130] > # blockio_reload = false
	I1209 23:23:44.148040   54859 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1209 23:23:44.148051   54859 command_runner.go:130] > # irqbalance daemon.
	I1209 23:23:44.148062   54859 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1209 23:23:44.148077   54859 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1209 23:23:44.148093   54859 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1209 23:23:44.148108   54859 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1209 23:23:44.148127   54859 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1209 23:23:44.148142   54859 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1209 23:23:44.148154   54859 command_runner.go:130] > # This option supports live configuration reload.
	I1209 23:23:44.148162   54859 command_runner.go:130] > # rdt_config_file = ""
	I1209 23:23:44.148178   54859 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1209 23:23:44.148187   54859 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1209 23:23:44.148211   54859 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1209 23:23:44.148224   54859 command_runner.go:130] > # separate_pull_cgroup = ""
	I1209 23:23:44.148238   54859 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1209 23:23:44.148248   54859 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1209 23:23:44.148259   54859 command_runner.go:130] > # will be added.
	I1209 23:23:44.148268   54859 command_runner.go:130] > # default_capabilities = [
	I1209 23:23:44.148277   54859 command_runner.go:130] > # 	"CHOWN",
	I1209 23:23:44.148283   54859 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1209 23:23:44.148287   54859 command_runner.go:130] > # 	"FSETID",
	I1209 23:23:44.148291   54859 command_runner.go:130] > # 	"FOWNER",
	I1209 23:23:44.148296   54859 command_runner.go:130] > # 	"SETGID",
	I1209 23:23:44.148302   54859 command_runner.go:130] > # 	"SETUID",
	I1209 23:23:44.148306   54859 command_runner.go:130] > # 	"SETPCAP",
	I1209 23:23:44.148310   54859 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1209 23:23:44.148315   54859 command_runner.go:130] > # 	"KILL",
	I1209 23:23:44.148318   54859 command_runner.go:130] > # ]
	I1209 23:23:44.148325   54859 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1209 23:23:44.148334   54859 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1209 23:23:44.148339   54859 command_runner.go:130] > # add_inheritable_capabilities = false
	I1209 23:23:44.148347   54859 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1209 23:23:44.148353   54859 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 23:23:44.148356   54859 command_runner.go:130] > default_sysctls = [
	I1209 23:23:44.148361   54859 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1209 23:23:44.148370   54859 command_runner.go:130] > ]
	I1209 23:23:44.148378   54859 command_runner.go:130] > # List of devices on the host that a
	I1209 23:23:44.148391   54859 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1209 23:23:44.148406   54859 command_runner.go:130] > # allowed_devices = [
	I1209 23:23:44.148423   54859 command_runner.go:130] > # 	"/dev/fuse",
	I1209 23:23:44.148434   54859 command_runner.go:130] > # ]
	I1209 23:23:44.148448   54859 command_runner.go:130] > # List of additional devices, specified as
	I1209 23:23:44.148464   54859 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1209 23:23:44.148477   54859 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1209 23:23:44.148491   54859 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 23:23:44.148503   54859 command_runner.go:130] > # additional_devices = [
	I1209 23:23:44.148518   54859 command_runner.go:130] > # ]
	I1209 23:23:44.148531   54859 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1209 23:23:44.148547   54859 command_runner.go:130] > # cdi_spec_dirs = [
	I1209 23:23:44.148558   54859 command_runner.go:130] > # 	"/etc/cdi",
	I1209 23:23:44.148566   54859 command_runner.go:130] > # 	"/var/run/cdi",
	I1209 23:23:44.148575   54859 command_runner.go:130] > # ]
	I1209 23:23:44.148585   54859 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1209 23:23:44.148598   54859 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1209 23:23:44.148609   54859 command_runner.go:130] > # Defaults to false.
	I1209 23:23:44.148620   54859 command_runner.go:130] > # device_ownership_from_security_context = false
	I1209 23:23:44.148636   54859 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1209 23:23:44.148650   54859 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1209 23:23:44.148658   54859 command_runner.go:130] > # hooks_dir = [
	I1209 23:23:44.148670   54859 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1209 23:23:44.148675   54859 command_runner.go:130] > # ]
	I1209 23:23:44.148685   54859 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1209 23:23:44.148699   54859 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1209 23:23:44.148713   54859 command_runner.go:130] > # its default mounts from the following two files:
	I1209 23:23:44.148724   54859 command_runner.go:130] > #
	I1209 23:23:44.148737   54859 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1209 23:23:44.148752   54859 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1209 23:23:44.148766   54859 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1209 23:23:44.148772   54859 command_runner.go:130] > #
	I1209 23:23:44.148789   54859 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1209 23:23:44.148802   54859 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1209 23:23:44.148817   54859 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1209 23:23:44.148831   54859 command_runner.go:130] > #      only add mounts it finds in this file.
	I1209 23:23:44.148842   54859 command_runner.go:130] > #
	I1209 23:23:44.148851   54859 command_runner.go:130] > # default_mounts_file = ""
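As a sketch of the format described above, a hypothetical override could point CRI-O at a custom mounts file whose contents are plain /SRC:/DST pairs, one per line (the file name and the paths below are placeholders, not values from this run):

	default_mounts_file = "/etc/containers/custom-mounts.conf"
	# custom-mounts.conf would then contain lines such as:
	#   /usr/share/zoneinfo:/usr/share/zoneinfo
	#   /etc/pki/ca-trust:/etc/pki/ca-trust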
	I1209 23:23:44.148864   54859 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1209 23:23:44.148876   54859 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1209 23:23:44.148886   54859 command_runner.go:130] > pids_limit = 1024
	I1209 23:23:44.148894   54859 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1209 23:23:44.148907   54859 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1209 23:23:44.148923   54859 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1209 23:23:44.148941   54859 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1209 23:23:44.148953   54859 command_runner.go:130] > # log_size_max = -1
	I1209 23:23:44.148964   54859 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1209 23:23:44.148976   54859 command_runner.go:130] > # log_to_journald = false
	I1209 23:23:44.148987   54859 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1209 23:23:44.148999   54859 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1209 23:23:44.149019   54859 command_runner.go:130] > # Path to directory for container attach sockets.
	I1209 23:23:44.149031   54859 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1209 23:23:44.149045   54859 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1209 23:23:44.149056   54859 command_runner.go:130] > # bind_mount_prefix = ""
	I1209 23:23:44.149070   54859 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1209 23:23:44.149081   54859 command_runner.go:130] > # read_only = false
	I1209 23:23:44.149093   54859 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1209 23:23:44.149107   54859 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1209 23:23:44.149119   54859 command_runner.go:130] > # live configuration reload.
	I1209 23:23:44.149127   54859 command_runner.go:130] > # log_level = "info"
	I1209 23:23:44.149138   54859 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1209 23:23:44.149150   54859 command_runner.go:130] > # This option supports live configuration reload.
	I1209 23:23:44.149160   54859 command_runner.go:130] > # log_filter = ""
	I1209 23:23:44.149174   54859 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1209 23:23:44.149189   54859 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1209 23:23:44.149202   54859 command_runner.go:130] > # separated by comma.
	I1209 23:23:44.149218   54859 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 23:23:44.149231   54859 command_runner.go:130] > # uid_mappings = ""
	I1209 23:23:44.149245   54859 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1209 23:23:44.149263   54859 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1209 23:23:44.149274   54859 command_runner.go:130] > # separated by comma.
	I1209 23:23:44.149317   54859 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 23:23:44.149342   54859 command_runner.go:130] > # gid_mappings = ""
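The containerID:HostID:Size range format described above can be illustrated with a single range per mapping; the values are hypothetical, and both options are marked deprecated in the config itself:

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"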
	I1209 23:23:44.149350   54859 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1209 23:23:44.149366   54859 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 23:23:44.149377   54859 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 23:23:44.149389   54859 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 23:23:44.149399   54859 command_runner.go:130] > # minimum_mappable_uid = -1
	I1209 23:23:44.149407   54859 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1209 23:23:44.149423   54859 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 23:23:44.149435   54859 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 23:23:44.149447   54859 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 23:23:44.149458   54859 command_runner.go:130] > # minimum_mappable_gid = -1
	I1209 23:23:44.149468   54859 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1209 23:23:44.149481   54859 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1209 23:23:44.149493   54859 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1209 23:23:44.149512   54859 command_runner.go:130] > # ctr_stop_timeout = 30
	I1209 23:23:44.149524   54859 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1209 23:23:44.149533   54859 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1209 23:23:44.149545   54859 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1209 23:23:44.149552   54859 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1209 23:23:44.149563   54859 command_runner.go:130] > drop_infra_ctr = false
	I1209 23:23:44.149572   54859 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1209 23:23:44.149584   54859 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1209 23:23:44.149599   54859 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1209 23:23:44.149609   54859 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1209 23:23:44.149621   54859 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1209 23:23:44.149633   54859 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1209 23:23:44.149645   54859 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1209 23:23:44.149657   54859 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1209 23:23:44.149667   54859 command_runner.go:130] > # shared_cpuset = ""
	I1209 23:23:44.149677   54859 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1209 23:23:44.149693   54859 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1209 23:23:44.149703   54859 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1209 23:23:44.149714   54859 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1209 23:23:44.149725   54859 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1209 23:23:44.149734   54859 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1209 23:23:44.149747   54859 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1209 23:23:44.149757   54859 command_runner.go:130] > # enable_criu_support = false
	I1209 23:23:44.149766   54859 command_runner.go:130] > # Enable/disable the generation of the container,
	I1209 23:23:44.149778   54859 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1209 23:23:44.149788   54859 command_runner.go:130] > # enable_pod_events = false
	I1209 23:23:44.149801   54859 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1209 23:23:44.149826   54859 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1209 23:23:44.149833   54859 command_runner.go:130] > # default_runtime = "runc"
	I1209 23:23:44.149845   54859 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1209 23:23:44.149862   54859 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1209 23:23:44.149874   54859 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1209 23:23:44.149881   54859 command_runner.go:130] > # creation as a file is not desired either.
	I1209 23:23:44.149890   54859 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1209 23:23:44.149899   54859 command_runner.go:130] > # the hostname is being managed dynamically.
	I1209 23:23:44.149904   54859 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1209 23:23:44.149907   54859 command_runner.go:130] > # ]
	I1209 23:23:44.149913   54859 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1209 23:23:44.149921   54859 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1209 23:23:44.149927   54859 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1209 23:23:44.149932   54859 command_runner.go:130] > # Each entry in the table should follow the format:
	I1209 23:23:44.149935   54859 command_runner.go:130] > #
	I1209 23:23:44.149940   54859 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1209 23:23:44.149945   54859 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1209 23:23:44.149967   54859 command_runner.go:130] > # runtime_type = "oci"
	I1209 23:23:44.149977   54859 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1209 23:23:44.149985   54859 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1209 23:23:44.149995   54859 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1209 23:23:44.150003   54859 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1209 23:23:44.150013   54859 command_runner.go:130] > # monitor_env = []
	I1209 23:23:44.150021   54859 command_runner.go:130] > # privileged_without_host_devices = false
	I1209 23:23:44.150031   54859 command_runner.go:130] > # allowed_annotations = []
	I1209 23:23:44.150040   54859 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1209 23:23:44.150049   54859 command_runner.go:130] > # Where:
	I1209 23:23:44.150060   54859 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1209 23:23:44.150073   54859 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1209 23:23:44.150086   54859 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1209 23:23:44.150098   54859 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1209 23:23:44.150107   54859 command_runner.go:130] > #   in $PATH.
	I1209 23:23:44.150117   54859 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1209 23:23:44.150126   54859 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1209 23:23:44.150132   54859 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1209 23:23:44.150138   54859 command_runner.go:130] > #   state.
	I1209 23:23:44.150144   54859 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1209 23:23:44.150152   54859 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1209 23:23:44.150158   54859 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1209 23:23:44.150166   54859 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1209 23:23:44.150172   54859 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1209 23:23:44.150180   54859 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1209 23:23:44.150184   54859 command_runner.go:130] > #   The currently recognized values are:
	I1209 23:23:44.150193   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1209 23:23:44.150199   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1209 23:23:44.150209   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1209 23:23:44.150215   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1209 23:23:44.150224   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1209 23:23:44.150230   54859 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1209 23:23:44.150238   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1209 23:23:44.150245   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1209 23:23:44.150253   54859 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1209 23:23:44.150258   54859 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1209 23:23:44.150263   54859 command_runner.go:130] > #   deprecated option "conmon".
	I1209 23:23:44.150270   54859 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1209 23:23:44.150277   54859 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1209 23:23:44.150283   54859 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1209 23:23:44.150294   54859 command_runner.go:130] > #   should be moved to the container's cgroup
	I1209 23:23:44.150300   54859 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1209 23:23:44.150308   54859 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1209 23:23:44.150313   54859 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1209 23:23:44.150318   54859 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1209 23:23:44.150323   54859 command_runner.go:130] > #
	I1209 23:23:44.150328   54859 command_runner.go:130] > # Using the seccomp notifier feature:
	I1209 23:23:44.150333   54859 command_runner.go:130] > #
	I1209 23:23:44.150339   54859 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1209 23:23:44.150347   54859 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1209 23:23:44.150351   54859 command_runner.go:130] > #
	I1209 23:23:44.150356   54859 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1209 23:23:44.150364   54859 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1209 23:23:44.150367   54859 command_runner.go:130] > #
	I1209 23:23:44.150373   54859 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1209 23:23:44.150379   54859 command_runner.go:130] > # feature.
	I1209 23:23:44.150382   54859 command_runner.go:130] > #
	I1209 23:23:44.150387   54859 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1209 23:23:44.150395   54859 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1209 23:23:44.150401   54859 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1209 23:23:44.150409   54859 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1209 23:23:44.150415   54859 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1209 23:23:44.150418   54859 command_runner.go:130] > #
	I1209 23:23:44.150424   54859 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1209 23:23:44.150433   54859 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1209 23:23:44.150439   54859 command_runner.go:130] > #
	I1209 23:23:44.150445   54859 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1209 23:23:44.150453   54859 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1209 23:23:44.150456   54859 command_runner.go:130] > #
	I1209 23:23:44.150461   54859 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1209 23:23:44.150469   54859 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1209 23:23:44.150473   54859 command_runner.go:130] > # limitation.
	I1209 23:23:44.150478   54859 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1209 23:23:44.150482   54859 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1209 23:23:44.150488   54859 command_runner.go:130] > runtime_type = "oci"
	I1209 23:23:44.150492   54859 command_runner.go:130] > runtime_root = "/run/runc"
	I1209 23:23:44.150496   54859 command_runner.go:130] > runtime_config_path = ""
	I1209 23:23:44.150502   54859 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 23:23:44.150508   54859 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 23:23:44.150514   54859 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 23:23:44.150518   54859 command_runner.go:130] > monitor_env = [
	I1209 23:23:44.150525   54859 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1209 23:23:44.150528   54859 command_runner.go:130] > ]
	I1209 23:23:44.150535   54859 command_runner.go:130] > privileged_without_host_devices = false
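Following the [crio.runtime.runtimes.runtime-handler] format documented above, an additional handler could be declared alongside runc. The crun paths and the annotation list below are illustrative assumptions, not part of this cluster's configuration:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]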
	I1209 23:23:44.150544   54859 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1209 23:23:44.150552   54859 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1209 23:23:44.150558   54859 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1209 23:23:44.150567   54859 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1209 23:23:44.150574   54859 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1209 23:23:44.150581   54859 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1209 23:23:44.150590   54859 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1209 23:23:44.150599   54859 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1209 23:23:44.150604   54859 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1209 23:23:44.150613   54859 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1209 23:23:44.150619   54859 command_runner.go:130] > # Example:
	I1209 23:23:44.150623   54859 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1209 23:23:44.150628   54859 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1209 23:23:44.150632   54859 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1209 23:23:44.150637   54859 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1209 23:23:44.150641   54859 command_runner.go:130] > # cpuset = 0
	I1209 23:23:44.150645   54859 command_runner.go:130] > # cpushares = "0-1"
	I1209 23:23:44.150648   54859 command_runner.go:130] > # Where:
	I1209 23:23:44.150654   54859 command_runner.go:130] > # The workload name is workload-type.
	I1209 23:23:44.150660   54859 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1209 23:23:44.150665   54859 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1209 23:23:44.150670   54859 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1209 23:23:44.150676   54859 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1209 23:23:44.150681   54859 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1209 23:23:44.150686   54859 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1209 23:23:44.150691   54859 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1209 23:23:44.150695   54859 command_runner.go:130] > # Default value is set to true
	I1209 23:23:44.150699   54859 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1209 23:23:44.150704   54859 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1209 23:23:44.150708   54859 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1209 23:23:44.150712   54859 command_runner.go:130] > # Default value is set to 'false'
	I1209 23:23:44.150715   54859 command_runner.go:130] > # disable_hostport_mapping = false
	I1209 23:23:44.150721   54859 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1209 23:23:44.150724   54859 command_runner.go:130] > #
	I1209 23:23:44.150729   54859 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1209 23:23:44.150735   54859 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1209 23:23:44.150741   54859 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1209 23:23:44.150749   54859 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1209 23:23:44.150754   54859 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1209 23:23:44.150760   54859 command_runner.go:130] > [crio.image]
	I1209 23:23:44.150766   54859 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1209 23:23:44.150772   54859 command_runner.go:130] > # default_transport = "docker://"
	I1209 23:23:44.150778   54859 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1209 23:23:44.150786   54859 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1209 23:23:44.150790   54859 command_runner.go:130] > # global_auth_file = ""
	I1209 23:23:44.150795   54859 command_runner.go:130] > # The image used to instantiate infra containers.
	I1209 23:23:44.150803   54859 command_runner.go:130] > # This option supports live configuration reload.
	I1209 23:23:44.150808   54859 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1209 23:23:44.150817   54859 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1209 23:23:44.150823   54859 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1209 23:23:44.150830   54859 command_runner.go:130] > # This option supports live configuration reload.
	I1209 23:23:44.150834   54859 command_runner.go:130] > # pause_image_auth_file = ""
	I1209 23:23:44.150842   54859 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1209 23:23:44.150847   54859 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1209 23:23:44.150857   54859 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1209 23:23:44.150865   54859 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1209 23:23:44.150871   54859 command_runner.go:130] > # pause_command = "/pause"
	I1209 23:23:44.150877   54859 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1209 23:23:44.150885   54859 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1209 23:23:44.150890   54859 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1209 23:23:44.150897   54859 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1209 23:23:44.150903   54859 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1209 23:23:44.150911   54859 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1209 23:23:44.150917   54859 command_runner.go:130] > # pinned_images = [
	I1209 23:23:44.150920   54859 command_runner.go:130] > # ]
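A populated pinned_images list combining the exact, glob, and keyword patterns described above could look like this sketch (the entries are examples, not values used by this run):

	pinned_images = [
		"registry.k8s.io/pause:3.10",  # exact match
		"registry.k8s.io/kube-*",      # glob: wildcard at the end
		"*coredns*",                   # keyword: wildcards on both ends
	]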
	I1209 23:23:44.150926   54859 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1209 23:23:44.150934   54859 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1209 23:23:44.150944   54859 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1209 23:23:44.150951   54859 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1209 23:23:44.150959   54859 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1209 23:23:44.150963   54859 command_runner.go:130] > # signature_policy = ""
	I1209 23:23:44.150970   54859 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1209 23:23:44.150976   54859 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1209 23:23:44.150985   54859 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1209 23:23:44.150991   54859 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1209 23:23:44.150998   54859 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1209 23:23:44.151003   54859 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1209 23:23:44.151011   54859 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1209 23:23:44.151017   54859 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1209 23:23:44.151023   54859 command_runner.go:130] > # changing them here.
	I1209 23:23:44.151027   54859 command_runner.go:130] > # insecure_registries = [
	I1209 23:23:44.151033   54859 command_runner.go:130] > # ]
	I1209 23:23:44.151039   54859 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1209 23:23:44.151046   54859 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1209 23:23:44.151050   54859 command_runner.go:130] > # image_volumes = "mkdir"
	I1209 23:23:44.151055   54859 command_runner.go:130] > # Temporary directory to use for storing big files
	I1209 23:23:44.151062   54859 command_runner.go:130] > # big_files_temporary_dir = ""
	I1209 23:23:44.151068   54859 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1209 23:23:44.151074   54859 command_runner.go:130] > # CNI plugins.
	I1209 23:23:44.151078   54859 command_runner.go:130] > [crio.network]
	I1209 23:23:44.151083   54859 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1209 23:23:44.151092   54859 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1209 23:23:44.151098   54859 command_runner.go:130] > # cni_default_network = ""
	I1209 23:23:44.151104   54859 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1209 23:23:44.151110   54859 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1209 23:23:44.151115   54859 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1209 23:23:44.151121   54859 command_runner.go:130] > # plugin_dirs = [
	I1209 23:23:44.151125   54859 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1209 23:23:44.151131   54859 command_runner.go:130] > # ]
	I1209 23:23:44.151137   54859 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1209 23:23:44.151143   54859 command_runner.go:130] > [crio.metrics]
	I1209 23:23:44.151148   54859 command_runner.go:130] > # Globally enable or disable metrics support.
	I1209 23:23:44.151154   54859 command_runner.go:130] > enable_metrics = true
	I1209 23:23:44.151159   54859 command_runner.go:130] > # Specify enabled metrics collectors.
	I1209 23:23:44.151165   54859 command_runner.go:130] > # Per default all metrics are enabled.
	I1209 23:23:44.151171   54859 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1209 23:23:44.151180   54859 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1209 23:23:44.151188   54859 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1209 23:23:44.151194   54859 command_runner.go:130] > # metrics_collectors = [
	I1209 23:23:44.151200   54859 command_runner.go:130] > # 	"operations",
	I1209 23:23:44.151204   54859 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1209 23:23:44.151211   54859 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1209 23:23:44.151215   54859 command_runner.go:130] > # 	"operations_errors",
	I1209 23:23:44.151221   54859 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1209 23:23:44.151226   54859 command_runner.go:130] > # 	"image_pulls_by_name",
	I1209 23:23:44.151233   54859 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1209 23:23:44.151237   54859 command_runner.go:130] > # 	"image_pulls_failures",
	I1209 23:23:44.151242   54859 command_runner.go:130] > # 	"image_pulls_successes",
	I1209 23:23:44.151246   54859 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1209 23:23:44.151250   54859 command_runner.go:130] > # 	"image_layer_reuse",
	I1209 23:23:44.151254   54859 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1209 23:23:44.151260   54859 command_runner.go:130] > # 	"containers_oom_total",
	I1209 23:23:44.151264   54859 command_runner.go:130] > # 	"containers_oom",
	I1209 23:23:44.151268   54859 command_runner.go:130] > # 	"processes_defunct",
	I1209 23:23:44.151274   54859 command_runner.go:130] > # 	"operations_total",
	I1209 23:23:44.151277   54859 command_runner.go:130] > # 	"operations_latency_seconds",
	I1209 23:23:44.151282   54859 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1209 23:23:44.151299   54859 command_runner.go:130] > # 	"operations_errors_total",
	I1209 23:23:44.151306   54859 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1209 23:23:44.151310   54859 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1209 23:23:44.151316   54859 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1209 23:23:44.151320   54859 command_runner.go:130] > # 	"image_pulls_success_total",
	I1209 23:23:44.151326   54859 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1209 23:23:44.151330   54859 command_runner.go:130] > # 	"containers_oom_count_total",
	I1209 23:23:44.151337   54859 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1209 23:23:44.151341   54859 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1209 23:23:44.151349   54859 command_runner.go:130] > # ]
	I1209 23:23:44.151356   54859 command_runner.go:130] > # The port on which the metrics server will listen.
	I1209 23:23:44.151360   54859 command_runner.go:130] > # metrics_port = 9090
	I1209 23:23:44.151367   54859 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1209 23:23:44.151371   54859 command_runner.go:130] > # metrics_socket = ""
	I1209 23:23:44.151375   54859 command_runner.go:130] > # The certificate for the secure metrics server.
	I1209 23:23:44.151383   54859 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1209 23:23:44.151389   54859 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1209 23:23:44.151396   54859 command_runner.go:130] > # certificate on any modification event.
	I1209 23:23:44.151401   54859 command_runner.go:130] > # metrics_cert = ""
	I1209 23:23:44.151408   54859 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1209 23:23:44.151416   54859 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1209 23:23:44.151420   54859 command_runner.go:130] > # metrics_key = ""
	I1209 23:23:44.151427   54859 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1209 23:23:44.151431   54859 command_runner.go:130] > [crio.tracing]
	I1209 23:23:44.151436   54859 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1209 23:23:44.151442   54859 command_runner.go:130] > # enable_tracing = false
	I1209 23:23:44.151448   54859 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1209 23:23:44.151454   54859 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1209 23:23:44.151460   54859 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1209 23:23:44.151467   54859 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1209 23:23:44.151471   54859 command_runner.go:130] > # CRI-O NRI configuration.
	I1209 23:23:44.151479   54859 command_runner.go:130] > [crio.nri]
	I1209 23:23:44.151484   54859 command_runner.go:130] > # Globally enable or disable NRI.
	I1209 23:23:44.151490   54859 command_runner.go:130] > # enable_nri = false
	I1209 23:23:44.151494   54859 command_runner.go:130] > # NRI socket to listen on.
	I1209 23:23:44.151501   54859 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1209 23:23:44.151505   54859 command_runner.go:130] > # NRI plugin directory to use.
	I1209 23:23:44.151512   54859 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1209 23:23:44.151517   54859 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1209 23:23:44.151524   54859 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1209 23:23:44.151529   54859 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1209 23:23:44.151536   54859 command_runner.go:130] > # nri_disable_connections = false
	I1209 23:23:44.151542   54859 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1209 23:23:44.151549   54859 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1209 23:23:44.151554   54859 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1209 23:23:44.151573   54859 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1209 23:23:44.151584   54859 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1209 23:23:44.151593   54859 command_runner.go:130] > [crio.stats]
	I1209 23:23:44.151599   54859 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1209 23:23:44.151606   54859 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1209 23:23:44.151611   54859 command_runner.go:130] > # stats_collection_period = 0
	I1209 23:23:44.151977   54859 command_runner.go:130] ! time="2024-12-09 23:23:44.118567274Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1209 23:23:44.152006   54859 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
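The dump above is CRI-O's effective configuration. Site-specific overrides of the options it documents are normally kept in a drop-in file rather than edited into the main config; a minimal sketch, assuming CRI-O's /etc/crio/crio.conf.d/ drop-in directory is in use and with purely illustrative values:

	# /etc/crio/crio.conf.d/99-overrides.conf (hypothetical drop-in)
	[crio.runtime]
	pids_limit = 2048
	default_runtime = "runc"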
	I1209 23:23:44.152101   54859 cni.go:84] Creating CNI manager for ""
	I1209 23:23:44.152112   54859 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1209 23:23:44.152120   54859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:23:44.152140   54859 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.48 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-555395 NodeName:multinode-555395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:23:44.152246   54859 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-555395"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.48"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.48"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:23:44.152322   54859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:23:44.162016   54859 command_runner.go:130] > kubeadm
	I1209 23:23:44.162124   54859 command_runner.go:130] > kubectl
	I1209 23:23:44.162136   54859 command_runner.go:130] > kubelet
	I1209 23:23:44.162160   54859 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:23:44.162209   54859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:23:44.170972   54859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1209 23:23:44.187310   54859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:23:44.202741   54859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1209 23:23:44.218882   54859 ssh_runner.go:195] Run: grep 192.168.39.48	control-plane.minikube.internal$ /etc/hosts
	I1209 23:23:44.222602   54859 command_runner.go:130] > 192.168.39.48	control-plane.minikube.internal
	I1209 23:23:44.222662   54859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:23:44.364188   54859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:23:44.378381   54859 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395 for IP: 192.168.39.48
	I1209 23:23:44.378400   54859 certs.go:194] generating shared ca certs ...
	I1209 23:23:44.378417   54859 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:23:44.378576   54859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:23:44.378618   54859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:23:44.378627   54859 certs.go:256] generating profile certs ...
	I1209 23:23:44.378732   54859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/client.key
	I1209 23:23:44.378790   54859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/apiserver.key.44de7fae
	I1209 23:23:44.378820   54859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/proxy-client.key
	I1209 23:23:44.378828   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 23:23:44.378841   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 23:23:44.378853   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 23:23:44.378865   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 23:23:44.378879   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 23:23:44.378891   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 23:23:44.378904   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 23:23:44.378919   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 23:23:44.378964   54859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:23:44.378989   54859 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:23:44.378999   54859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:23:44.379027   54859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:23:44.379050   54859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:23:44.379070   54859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:23:44.379108   54859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:23:44.379133   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:23:44.379146   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 23:23:44.379158   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 23:23:44.379767   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:23:44.402672   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:23:44.425249   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:23:44.448241   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:23:44.471457   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 23:23:44.494441   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 23:23:44.517282   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:23:44.540207   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 23:23:44.564410   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:23:44.587614   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:23:44.610807   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:23:44.633934   54859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:23:44.650348   54859 ssh_runner.go:195] Run: openssl version
	I1209 23:23:44.655688   54859 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1209 23:23:44.655764   54859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:23:44.666103   54859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:23:44.670224   54859 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:23:44.670253   54859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:23:44.670291   54859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:23:44.675578   54859 command_runner.go:130] > b5213941
	I1209 23:23:44.675650   54859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:23:44.684973   54859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:23:44.695517   54859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:23:44.700007   54859 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:23:44.700047   54859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:23:44.700098   54859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:23:44.705369   54859 command_runner.go:130] > 51391683
	I1209 23:23:44.705653   54859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:23:44.714771   54859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:23:44.725054   54859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:23:44.729360   54859 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:23:44.729446   54859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:23:44.729506   54859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:23:44.734993   54859 command_runner.go:130] > 3ec20f2e
	I1209 23:23:44.735068   54859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:23:44.744259   54859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:23:44.748576   54859 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:23:44.748603   54859 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 23:23:44.748609   54859 command_runner.go:130] > Device: 253,1	Inode: 2103342     Links: 1
	I1209 23:23:44.748615   54859 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 23:23:44.748621   54859 command_runner.go:130] > Access: 2024-12-09 23:17:04.148672813 +0000
	I1209 23:23:44.748625   54859 command_runner.go:130] > Modify: 2024-12-09 23:17:04.148672813 +0000
	I1209 23:23:44.748630   54859 command_runner.go:130] > Change: 2024-12-09 23:17:04.148672813 +0000
	I1209 23:23:44.748634   54859 command_runner.go:130] >  Birth: 2024-12-09 23:17:04.148672813 +0000
	I1209 23:23:44.748747   54859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:23:44.754153   54859 command_runner.go:130] > Certificate will not expire
	I1209 23:23:44.754215   54859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:23:44.759590   54859 command_runner.go:130] > Certificate will not expire
	I1209 23:23:44.759655   54859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:23:44.764924   54859 command_runner.go:130] > Certificate will not expire
	I1209 23:23:44.765044   54859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:23:44.770622   54859 command_runner.go:130] > Certificate will not expire
	I1209 23:23:44.770685   54859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:23:44.776302   54859 command_runner.go:130] > Certificate will not expire
	I1209 23:23:44.776357   54859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 23:23:44.781987   54859 command_runner.go:130] > Certificate will not expire
	I1209 23:23:44.782185   54859 kubeadm.go:392] StartCluster: {Name:multinode-555395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-555395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.99 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:23:44.782293   54859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:23:44.782332   54859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:23:44.816639   54859 command_runner.go:130] > c99c37247d46e67e7e3c175977b1a7a5ba72c2f9e8be6c93c316073beb8b032e
	I1209 23:23:44.816668   54859 command_runner.go:130] > 7bac2bd12d4ffacf75cbd5a00d9280ccfc1a1c1807e3f86a0c2aa7400908cf64
	I1209 23:23:44.816674   54859 command_runner.go:130] > a94a2e0e32a100747e56aef38dc31c3dfdbc3c578cc30581623ccec551c3fcca
	I1209 23:23:44.816681   54859 command_runner.go:130] > fed5163426ab16376195c7072f36efa6c0bce7f8a8175a2e77f24480eca551d0
	I1209 23:23:44.816686   54859 command_runner.go:130] > c6c47bc106b079e6cce87013ca67f3a03a7bc882f443116ec7bee46ffd42fa85
	I1209 23:23:44.816691   54859 command_runner.go:130] > f8c195b1f78b47fcd63aa8d06fa74e569778d428e61743248aa7bd367a262022
	I1209 23:23:44.816697   54859 command_runner.go:130] > 0665312a47f2878d2be0a2909cd95c8fd738cc5f6cebf3d426c3a78611cfccea
	I1209 23:23:44.816703   54859 command_runner.go:130] > cd931d3954579a9b381c2e73ddd63c6c2fafa6f777f2d704d1b38b37ef58b6f8
	I1209 23:23:44.817994   54859 cri.go:89] found id: "c99c37247d46e67e7e3c175977b1a7a5ba72c2f9e8be6c93c316073beb8b032e"
	I1209 23:23:44.818009   54859 cri.go:89] found id: "7bac2bd12d4ffacf75cbd5a00d9280ccfc1a1c1807e3f86a0c2aa7400908cf64"
	I1209 23:23:44.818012   54859 cri.go:89] found id: "a94a2e0e32a100747e56aef38dc31c3dfdbc3c578cc30581623ccec551c3fcca"
	I1209 23:23:44.818015   54859 cri.go:89] found id: "fed5163426ab16376195c7072f36efa6c0bce7f8a8175a2e77f24480eca551d0"
	I1209 23:23:44.818017   54859 cri.go:89] found id: "c6c47bc106b079e6cce87013ca67f3a03a7bc882f443116ec7bee46ffd42fa85"
	I1209 23:23:44.818021   54859 cri.go:89] found id: "f8c195b1f78b47fcd63aa8d06fa74e569778d428e61743248aa7bd367a262022"
	I1209 23:23:44.818023   54859 cri.go:89] found id: "0665312a47f2878d2be0a2909cd95c8fd738cc5f6cebf3d426c3a78611cfccea"
	I1209 23:23:44.818026   54859 cri.go:89] found id: "cd931d3954579a9b381c2e73ddd63c6c2fafa6f777f2d704d1b38b37ef58b6f8"
	I1209 23:23:44.818028   54859 cri.go:89] found id: ""
	I1209 23:23:44.818067   54859 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-555395 -n multinode-555395
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-555395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (319.62s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (145.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 stop
E1209 23:26:15.592773   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-555395 stop: exit status 82 (2m0.474980255s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-555395-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-555395 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-555395 status: (18.740116552s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-555395 status --alsologtostderr: (3.387165025s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-555395 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-555395 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-555395 -n multinode-555395
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-555395 logs -n 25: (2.036298153s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-555395 cp multinode-555395-m02:/home/docker/cp-test.txt                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395:/home/docker/cp-test_multinode-555395-m02_multinode-555395.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n multinode-555395 sudo cat                                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | /home/docker/cp-test_multinode-555395-m02_multinode-555395.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-555395 cp multinode-555395-m02:/home/docker/cp-test.txt                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m03:/home/docker/cp-test_multinode-555395-m02_multinode-555395-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n multinode-555395-m03 sudo cat                                   | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | /home/docker/cp-test_multinode-555395-m02_multinode-555395-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-555395 cp testdata/cp-test.txt                                                | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-555395 cp multinode-555395-m03:/home/docker/cp-test.txt                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile813365822/001/cp-test_multinode-555395-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-555395 cp multinode-555395-m03:/home/docker/cp-test.txt                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395:/home/docker/cp-test_multinode-555395-m03_multinode-555395.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n multinode-555395 sudo cat                                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | /home/docker/cp-test_multinode-555395-m03_multinode-555395.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-555395 cp multinode-555395-m03:/home/docker/cp-test.txt                       | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m02:/home/docker/cp-test_multinode-555395-m03_multinode-555395-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n                                                                 | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | multinode-555395-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-555395 ssh -n multinode-555395-m02 sudo cat                                   | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	|         | /home/docker/cp-test_multinode-555395-m03_multinode-555395-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-555395 node stop m03                                                          | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:19 UTC |
	| node    | multinode-555395 node start                                                             | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:19 UTC | 09 Dec 24 23:20 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-555395                                                                | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC |                     |
	| stop    | -p multinode-555395                                                                     | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC |                     |
	| start   | -p multinode-555395                                                                     | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:22 UTC | 09 Dec 24 23:25 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-555395                                                                | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:25 UTC |                     |
	| node    | multinode-555395 node delete                                                            | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:25 UTC | 09 Dec 24 23:25 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-555395 stop                                                                   | multinode-555395 | jenkins | v1.34.0 | 09 Dec 24 23:25 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:22:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:22:10.739345   54859 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:22:10.739469   54859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:22:10.739478   54859 out.go:358] Setting ErrFile to fd 2...
	I1209 23:22:10.739482   54859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:22:10.739657   54859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:22:10.740172   54859 out.go:352] Setting JSON to false
	I1209 23:22:10.741060   54859 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7482,"bootTime":1733779049,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:22:10.741175   54859 start.go:139] virtualization: kvm guest
	I1209 23:22:10.743487   54859 out.go:177] * [multinode-555395] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:22:10.744832   54859 notify.go:220] Checking for updates...
	I1209 23:22:10.744875   54859 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:22:10.746521   54859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:22:10.747932   54859 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:22:10.749136   54859 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:22:10.750183   54859 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:22:10.751360   54859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:22:10.752894   54859 config.go:182] Loaded profile config "multinode-555395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:22:10.752970   54859 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:22:10.753396   54859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:22:10.753451   54859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:22:10.768491   54859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35673
	I1209 23:22:10.768940   54859 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:22:10.769517   54859 main.go:141] libmachine: Using API Version  1
	I1209 23:22:10.769536   54859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:22:10.769918   54859 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:22:10.770110   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:22:10.807017   54859 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 23:22:10.808160   54859 start.go:297] selected driver: kvm2
	I1209 23:22:10.808179   54859 start.go:901] validating driver "kvm2" against &{Name:multinode-555395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-555395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.99 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fa
lse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:22:10.808382   54859 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:22:10.808766   54859 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:22:10.808856   54859 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:22:10.825598   54859 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:22:10.826304   54859 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:22:10.826332   54859 cni.go:84] Creating CNI manager for ""
	I1209 23:22:10.826358   54859 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1209 23:22:10.826422   54859 start.go:340] cluster config:
	{Name:multinode-555395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-555395 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.99 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner
:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:22:10.826546   54859 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:22:10.829160   54859 out.go:177] * Starting "multinode-555395" primary control-plane node in "multinode-555395" cluster
	I1209 23:22:10.830539   54859 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:22:10.830583   54859 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 23:22:10.830591   54859 cache.go:56] Caching tarball of preloaded images
	I1209 23:22:10.830687   54859 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:22:10.830699   54859 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 23:22:10.830801   54859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/config.json ...
	I1209 23:22:10.831041   54859 start.go:360] acquireMachinesLock for multinode-555395: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:22:10.831090   54859 start.go:364] duration metric: took 28.347µs to acquireMachinesLock for "multinode-555395"
	I1209 23:22:10.831107   54859 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:22:10.831112   54859 fix.go:54] fixHost starting: 
	I1209 23:22:10.831370   54859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:22:10.831402   54859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:22:10.846275   54859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I1209 23:22:10.846644   54859 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:22:10.847074   54859 main.go:141] libmachine: Using API Version  1
	I1209 23:22:10.847096   54859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:22:10.847422   54859 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:22:10.847593   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:22:10.847745   54859 main.go:141] libmachine: (multinode-555395) Calling .GetState
	I1209 23:22:10.849390   54859 fix.go:112] recreateIfNeeded on multinode-555395: state=Running err=<nil>
	W1209 23:22:10.849421   54859 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:22:10.851947   54859 out.go:177] * Updating the running kvm2 "multinode-555395" VM ...
	I1209 23:22:10.853211   54859 machine.go:93] provisionDockerMachine start ...
	I1209 23:22:10.853231   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:22:10.853420   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:22:10.855784   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:10.856173   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:10.856196   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:10.856353   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:22:10.856492   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:10.856645   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:10.856786   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:22:10.856950   54859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:22:10.857160   54859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1209 23:22:10.857173   54859 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:22:10.967579   54859 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-555395
	
	I1209 23:22:10.967606   54859 main.go:141] libmachine: (multinode-555395) Calling .GetMachineName
	I1209 23:22:10.967874   54859 buildroot.go:166] provisioning hostname "multinode-555395"
	I1209 23:22:10.967923   54859 main.go:141] libmachine: (multinode-555395) Calling .GetMachineName
	I1209 23:22:10.968098   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:22:10.970425   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:10.970853   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:10.970879   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:10.971069   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:22:10.971268   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:10.971441   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:10.971613   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:22:10.971835   54859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:22:10.972023   54859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1209 23:22:10.972041   54859 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-555395 && echo "multinode-555395" | sudo tee /etc/hostname
	I1209 23:22:11.096531   54859 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-555395
	
	I1209 23:22:11.096565   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:22:11.099454   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.099783   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:11.099814   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.099994   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:22:11.100160   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:11.100324   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:11.100468   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:22:11.100663   54859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:22:11.100867   54859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1209 23:22:11.100885   54859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-555395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-555395/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-555395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:22:11.216936   54859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:22:11.216969   54859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:22:11.217022   54859 buildroot.go:174] setting up certificates
	I1209 23:22:11.217032   54859 provision.go:84] configureAuth start
	I1209 23:22:11.217044   54859 main.go:141] libmachine: (multinode-555395) Calling .GetMachineName
	I1209 23:22:11.217346   54859 main.go:141] libmachine: (multinode-555395) Calling .GetIP
	I1209 23:22:11.220474   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.220890   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:11.220921   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.221096   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:22:11.223284   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.223624   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:11.223663   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.223825   54859 provision.go:143] copyHostCerts
	I1209 23:22:11.223888   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:22:11.223924   54859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:22:11.223937   54859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:22:11.224010   54859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:22:11.224088   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:22:11.224109   54859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:22:11.224118   54859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:22:11.224147   54859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:22:11.224202   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:22:11.224221   54859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:22:11.224228   54859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:22:11.224253   54859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:22:11.224329   54859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.multinode-555395 san=[127.0.0.1 192.168.39.48 localhost minikube multinode-555395]
	I1209 23:22:11.414438   54859 provision.go:177] copyRemoteCerts
	I1209 23:22:11.414512   54859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:22:11.414548   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:22:11.417124   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.417462   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:11.417497   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.417689   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:22:11.417848   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:11.418006   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:22:11.418149   54859 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/multinode-555395/id_rsa Username:docker}
	I1209 23:22:11.502013   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 23:22:11.502083   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:22:11.526110   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 23:22:11.526185   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1209 23:22:11.551371   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 23:22:11.551450   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:22:11.576807   54859 provision.go:87] duration metric: took 359.762529ms to configureAuth
	I1209 23:22:11.576838   54859 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:22:11.577063   54859 config.go:182] Loaded profile config "multinode-555395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:22:11.577132   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:22:11.579900   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.580273   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:22:11.580306   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:22:11.580477   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:22:11.580677   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:11.580825   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:22:11.580983   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:22:11.581131   54859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:22:11.581358   54859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1209 23:22:11.581381   54859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:23:42.286375   54859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:23:42.286403   54859 machine.go:96] duration metric: took 1m31.433176078s to provisionDockerMachine
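The `sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio` command issued at 23:22:11 only returned at 23:23:42, which accounts for most of the 1m31s provisionDockerMachine duration reported above. If that restart needs investigating, the CRI-O journal on the guest is the natural first stop; a minimal check (journalctl is standard systemd tooling, nothing minikube-specific, and the profile name and timestamps are taken from this run):

    minikube ssh -p multinode-555395 -- \
      sudo journalctl -u crio --since "2024-12-09 23:22:00" --no-pager | tail -n 50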
	I1209 23:23:42.286422   54859 start.go:293] postStartSetup for "multinode-555395" (driver="kvm2")
	I1209 23:23:42.286437   54859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:23:42.286467   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:23:42.286876   54859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:23:42.286913   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:23:42.290177   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.290710   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:23:42.290734   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.290929   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:23:42.291089   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:23:42.291245   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:23:42.291394   54859 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/multinode-555395/id_rsa Username:docker}
	I1209 23:23:42.378120   54859 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:23:42.382199   54859 command_runner.go:130] > NAME=Buildroot
	I1209 23:23:42.382218   54859 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1209 23:23:42.382222   54859 command_runner.go:130] > ID=buildroot
	I1209 23:23:42.382227   54859 command_runner.go:130] > VERSION_ID=2023.02.9
	I1209 23:23:42.382232   54859 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1209 23:23:42.382425   54859 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:23:42.382449   54859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:23:42.382536   54859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:23:42.382645   54859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:23:42.382657   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
	I1209 23:23:42.382758   54859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:23:42.392008   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:23:42.416175   54859 start.go:296] duration metric: took 129.7373ms for postStartSetup
	I1209 23:23:42.416223   54859 fix.go:56] duration metric: took 1m31.585109382s for fixHost
	I1209 23:23:42.416262   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:23:42.418856   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.419201   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:23:42.419232   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.419407   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:23:42.419598   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:23:42.419768   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:23:42.419887   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:23:42.420017   54859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:23:42.420191   54859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1209 23:23:42.420204   54859 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:23:42.528002   54859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733786622.507933675
	
	I1209 23:23:42.528027   54859 fix.go:216] guest clock: 1733786622.507933675
	I1209 23:23:42.528035   54859 fix.go:229] Guest: 2024-12-09 23:23:42.507933675 +0000 UTC Remote: 2024-12-09 23:23:42.416228751 +0000 UTC m=+91.713477283 (delta=91.704924ms)
	I1209 23:23:42.528051   54859 fix.go:200] guest clock delta is within tolerance: 91.704924ms
	I1209 23:23:42.528056   54859 start.go:83] releasing machines lock for "multinode-555395", held for 1m31.696956201s
	I1209 23:23:42.528077   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:23:42.528327   54859 main.go:141] libmachine: (multinode-555395) Calling .GetIP
	I1209 23:23:42.530966   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.531345   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:23:42.531385   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.531551   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:23:42.532059   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:23:42.532245   54859 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:23:42.532325   54859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:23:42.532387   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:23:42.532434   54859 ssh_runner.go:195] Run: cat /version.json
	I1209 23:23:42.532460   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:23:42.534695   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.534993   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.535026   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:23:42.535056   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.535261   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:23:42.535296   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:23:42.535322   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:42.535421   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:23:42.535486   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:23:42.535589   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:23:42.535655   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:23:42.535713   54859 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/multinode-555395/id_rsa Username:docker}
	I1209 23:23:42.535764   54859 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:23:42.535899   54859 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/multinode-555395/id_rsa Username:docker}
	I1209 23:23:42.625360   54859 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1209 23:23:42.643189   54859 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 23:23:42.644144   54859 ssh_runner.go:195] Run: systemctl --version
	I1209 23:23:42.650685   54859 command_runner.go:130] > systemd 252 (252)
	I1209 23:23:42.650717   54859 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1209 23:23:42.650773   54859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:23:42.804627   54859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 23:23:42.810076   54859 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 23:23:42.810432   54859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:23:42.810488   54859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:23:42.819486   54859 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 23:23:42.819509   54859 start.go:495] detecting cgroup driver to use...
	I1209 23:23:42.819574   54859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:23:42.835638   54859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:23:42.848550   54859 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:23:42.848610   54859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:23:42.861490   54859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:23:42.874313   54859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:23:43.023874   54859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:23:43.155667   54859 docker.go:233] disabling docker service ...
	I1209 23:23:43.155730   54859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:23:43.172225   54859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:23:43.185784   54859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:23:43.325351   54859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:23:43.468866   54859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:23:43.482453   54859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:23:43.500588   54859 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1209 23:23:43.500631   54859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:23:43.500676   54859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:23:43.510359   54859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:23:43.510423   54859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:23:43.520075   54859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:23:43.530173   54859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:23:43.539664   54859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:23:43.549511   54859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:23:43.559519   54859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:23:43.569908   54859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
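The chain of sed commands above edits a single CRI-O drop-in. Reconstructed purely from those commands (this is a sketch, not a dump of the actual file on the node, and TOML section headers are omitted because the sed expressions only touch the keys), /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]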
	I1209 23:23:43.579500   54859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:23:43.588180   54859 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 23:23:43.588246   54859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:23:43.596900   54859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:23:43.733673   54859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:23:43.924528   54859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:23:43.924592   54859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:23:43.929277   54859 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1209 23:23:43.929310   54859 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 23:23:43.929317   54859 command_runner.go:130] > Device: 0,22	Inode: 1298        Links: 1
	I1209 23:23:43.929324   54859 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 23:23:43.929328   54859 command_runner.go:130] > Access: 2024-12-09 23:23:43.799670877 +0000
	I1209 23:23:43.929334   54859 command_runner.go:130] > Modify: 2024-12-09 23:23:43.799670877 +0000
	I1209 23:23:43.929339   54859 command_runner.go:130] > Change: 2024-12-09 23:23:43.799670877 +0000
	I1209 23:23:43.929342   54859 command_runner.go:130] >  Birth: -
	I1209 23:23:43.929452   54859 start.go:563] Will wait 60s for crictl version
	I1209 23:23:43.929522   54859 ssh_runner.go:195] Run: which crictl
	I1209 23:23:43.933004   54859 command_runner.go:130] > /usr/bin/crictl
	I1209 23:23:43.933078   54859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:23:43.968243   54859 command_runner.go:130] > Version:  0.1.0
	I1209 23:23:43.968273   54859 command_runner.go:130] > RuntimeName:  cri-o
	I1209 23:23:43.968281   54859 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1209 23:23:43.968289   54859 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 23:23:43.968317   54859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:23:43.968377   54859 ssh_runner.go:195] Run: crio --version
	I1209 23:23:43.994880   54859 command_runner.go:130] > crio version 1.29.1
	I1209 23:23:43.994907   54859 command_runner.go:130] > Version:        1.29.1
	I1209 23:23:43.994914   54859 command_runner.go:130] > GitCommit:      unknown
	I1209 23:23:43.994918   54859 command_runner.go:130] > GitCommitDate:  unknown
	I1209 23:23:43.994922   54859 command_runner.go:130] > GitTreeState:   clean
	I1209 23:23:43.994927   54859 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1209 23:23:43.994931   54859 command_runner.go:130] > GoVersion:      go1.21.6
	I1209 23:23:43.994935   54859 command_runner.go:130] > Compiler:       gc
	I1209 23:23:43.994940   54859 command_runner.go:130] > Platform:       linux/amd64
	I1209 23:23:43.994944   54859 command_runner.go:130] > Linkmode:       dynamic
	I1209 23:23:43.994948   54859 command_runner.go:130] > BuildTags:      
	I1209 23:23:43.994953   54859 command_runner.go:130] >   containers_image_ostree_stub
	I1209 23:23:43.994957   54859 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1209 23:23:43.994964   54859 command_runner.go:130] >   btrfs_noversion
	I1209 23:23:43.994968   54859 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1209 23:23:43.994975   54859 command_runner.go:130] >   libdm_no_deferred_remove
	I1209 23:23:43.994983   54859 command_runner.go:130] >   seccomp
	I1209 23:23:43.994990   54859 command_runner.go:130] > LDFlags:          unknown
	I1209 23:23:43.994995   54859 command_runner.go:130] > SeccompEnabled:   true
	I1209 23:23:43.995001   54859 command_runner.go:130] > AppArmorEnabled:  false
	I1209 23:23:43.996054   54859 ssh_runner.go:195] Run: crio --version
	I1209 23:23:44.022250   54859 command_runner.go:130] > crio version 1.29.1
	I1209 23:23:44.022270   54859 command_runner.go:130] > Version:        1.29.1
	I1209 23:23:44.022275   54859 command_runner.go:130] > GitCommit:      unknown
	I1209 23:23:44.022279   54859 command_runner.go:130] > GitCommitDate:  unknown
	I1209 23:23:44.022289   54859 command_runner.go:130] > GitTreeState:   clean
	I1209 23:23:44.022295   54859 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1209 23:23:44.022300   54859 command_runner.go:130] > GoVersion:      go1.21.6
	I1209 23:23:44.022304   54859 command_runner.go:130] > Compiler:       gc
	I1209 23:23:44.022308   54859 command_runner.go:130] > Platform:       linux/amd64
	I1209 23:23:44.022312   54859 command_runner.go:130] > Linkmode:       dynamic
	I1209 23:23:44.022317   54859 command_runner.go:130] > BuildTags:      
	I1209 23:23:44.022321   54859 command_runner.go:130] >   containers_image_ostree_stub
	I1209 23:23:44.022324   54859 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1209 23:23:44.022328   54859 command_runner.go:130] >   btrfs_noversion
	I1209 23:23:44.022332   54859 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1209 23:23:44.022337   54859 command_runner.go:130] >   libdm_no_deferred_remove
	I1209 23:23:44.022340   54859 command_runner.go:130] >   seccomp
	I1209 23:23:44.022347   54859 command_runner.go:130] > LDFlags:          unknown
	I1209 23:23:44.022351   54859 command_runner.go:130] > SeccompEnabled:   true
	I1209 23:23:44.022355   54859 command_runner.go:130] > AppArmorEnabled:  false
	I1209 23:23:44.026258   54859 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:23:44.027595   54859 main.go:141] libmachine: (multinode-555395) Calling .GetIP
	I1209 23:23:44.030187   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:44.030632   54859 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:23:44.030662   54859 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:23:44.030825   54859 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 23:23:44.034842   54859 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1209 23:23:44.034938   54859 kubeadm.go:883] updating cluster {Name:multinode-555395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-555395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.99 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
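The cluster spec logged above (kvm2 driver, CRI-O runtime, Kubernetes v1.31.2, a control-plane node at 192.168.39.48 plus workers m02 and m03) is read back from the profile's on-disk config. To see the same data in a readable form, minikube keeps it as JSON under the profile directory; the profiles/<name>/config.json layout is standard minikube behaviour rather than something this log shows directly, and the MINIKUBE_HOME path below is the one used elsewhere in this run:

    python3 -m json.tool \
      /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/config.json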
	I1209 23:23:44.035082   54859 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:23:44.035140   54859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:23:44.073020   54859 command_runner.go:130] > {
	I1209 23:23:44.073041   54859 command_runner.go:130] >   "images": [
	I1209 23:23:44.073048   54859 command_runner.go:130] >     {
	I1209 23:23:44.073058   54859 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1209 23:23:44.073062   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073068   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1209 23:23:44.073071   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073075   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073083   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1209 23:23:44.073090   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1209 23:23:44.073094   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073098   54859 command_runner.go:130] >       "size": "94965812",
	I1209 23:23:44.073103   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.073107   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073117   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073124   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073128   54859 command_runner.go:130] >     },
	I1209 23:23:44.073134   54859 command_runner.go:130] >     {
	I1209 23:23:44.073140   54859 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1209 23:23:44.073147   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073152   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1209 23:23:44.073158   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073162   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073171   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1209 23:23:44.073178   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1209 23:23:44.073184   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073188   54859 command_runner.go:130] >       "size": "94963761",
	I1209 23:23:44.073194   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.073201   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073207   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073211   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073217   54859 command_runner.go:130] >     },
	I1209 23:23:44.073220   54859 command_runner.go:130] >     {
	I1209 23:23:44.073226   54859 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1209 23:23:44.073230   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073235   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1209 23:23:44.073242   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073246   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073255   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1209 23:23:44.073265   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1209 23:23:44.073268   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073274   54859 command_runner.go:130] >       "size": "1363676",
	I1209 23:23:44.073278   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.073284   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073289   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073295   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073299   54859 command_runner.go:130] >     },
	I1209 23:23:44.073303   54859 command_runner.go:130] >     {
	I1209 23:23:44.073309   54859 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1209 23:23:44.073315   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073321   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 23:23:44.073334   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073338   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073348   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1209 23:23:44.073361   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1209 23:23:44.073367   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073372   54859 command_runner.go:130] >       "size": "31470524",
	I1209 23:23:44.073378   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.073382   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073388   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073392   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073398   54859 command_runner.go:130] >     },
	I1209 23:23:44.073402   54859 command_runner.go:130] >     {
	I1209 23:23:44.073411   54859 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1209 23:23:44.073417   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073422   54859 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1209 23:23:44.073428   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073433   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073445   54859 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1209 23:23:44.073455   54859 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1209 23:23:44.073461   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073466   54859 command_runner.go:130] >       "size": "63273227",
	I1209 23:23:44.073472   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.073477   54859 command_runner.go:130] >       "username": "nonroot",
	I1209 23:23:44.073483   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073487   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073493   54859 command_runner.go:130] >     },
	I1209 23:23:44.073496   54859 command_runner.go:130] >     {
	I1209 23:23:44.073505   54859 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1209 23:23:44.073511   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073516   54859 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1209 23:23:44.073522   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073526   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073534   54859 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1209 23:23:44.073543   54859 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1209 23:23:44.073549   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073553   54859 command_runner.go:130] >       "size": "149009664",
	I1209 23:23:44.073560   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.073564   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.073569   54859 command_runner.go:130] >       },
	I1209 23:23:44.073573   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073576   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073598   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073607   54859 command_runner.go:130] >     },
	I1209 23:23:44.073610   54859 command_runner.go:130] >     {
	I1209 23:23:44.073616   54859 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1209 23:23:44.073623   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073629   54859 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1209 23:23:44.073635   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073639   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073648   54859 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1209 23:23:44.073658   54859 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1209 23:23:44.073665   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073669   54859 command_runner.go:130] >       "size": "95274464",
	I1209 23:23:44.073675   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.073679   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.073684   54859 command_runner.go:130] >       },
	I1209 23:23:44.073690   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073693   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073701   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073704   54859 command_runner.go:130] >     },
	I1209 23:23:44.073710   54859 command_runner.go:130] >     {
	I1209 23:23:44.073716   54859 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1209 23:23:44.073723   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073728   54859 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1209 23:23:44.073733   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073737   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073753   54859 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1209 23:23:44.073763   54859 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1209 23:23:44.073769   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073774   54859 command_runner.go:130] >       "size": "89474374",
	I1209 23:23:44.073780   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.073784   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.073790   54859 command_runner.go:130] >       },
	I1209 23:23:44.073794   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073801   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073805   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073808   54859 command_runner.go:130] >     },
	I1209 23:23:44.073811   54859 command_runner.go:130] >     {
	I1209 23:23:44.073817   54859 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1209 23:23:44.073821   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073825   54859 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1209 23:23:44.073828   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073832   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073839   54859 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1209 23:23:44.073846   54859 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1209 23:23:44.073849   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073853   54859 command_runner.go:130] >       "size": "92783513",
	I1209 23:23:44.073857   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.073864   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073868   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073874   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073877   54859 command_runner.go:130] >     },
	I1209 23:23:44.073881   54859 command_runner.go:130] >     {
	I1209 23:23:44.073887   54859 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1209 23:23:44.073894   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073898   54859 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1209 23:23:44.073904   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073908   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.073917   54859 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1209 23:23:44.073926   54859 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1209 23:23:44.073932   54859 command_runner.go:130] >       ],
	I1209 23:23:44.073936   54859 command_runner.go:130] >       "size": "68457798",
	I1209 23:23:44.073941   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.073945   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.073949   54859 command_runner.go:130] >       },
	I1209 23:23:44.073954   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.073958   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.073965   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.073968   54859 command_runner.go:130] >     },
	I1209 23:23:44.073974   54859 command_runner.go:130] >     {
	I1209 23:23:44.073979   54859 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1209 23:23:44.073985   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.073990   54859 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1209 23:23:44.073995   54859 command_runner.go:130] >       ],
	I1209 23:23:44.074000   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.074009   54859 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1209 23:23:44.074018   54859 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1209 23:23:44.074025   54859 command_runner.go:130] >       ],
	I1209 23:23:44.074029   54859 command_runner.go:130] >       "size": "742080",
	I1209 23:23:44.074035   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.074039   54859 command_runner.go:130] >         "value": "65535"
	I1209 23:23:44.074045   54859 command_runner.go:130] >       },
	I1209 23:23:44.074049   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.074055   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.074059   54859 command_runner.go:130] >       "pinned": true
	I1209 23:23:44.074064   54859 command_runner.go:130] >     }
	I1209 23:23:44.074067   54859 command_runner.go:130] >   ]
	I1209 23:23:44.074071   54859 command_runner.go:130] > }
	I1209 23:23:44.074747   54859 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:23:44.074765   54859 crio.go:433] Images already preloaded, skipping extraction
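crio.go decides the preload is already in place by parsing the `sudo crictl images --output json` output above. To eyeball the same list by hand, something like the following works (jq on the host is an assumption on my part; it is not part of the Buildroot guest image):

    minikube ssh -p multinode-555395 -- sudo crictl images --output json \
      | jq -r '.images[].repoTags[]'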
	I1209 23:23:44.074834   54859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:23:44.104419   54859 command_runner.go:130] > {
	I1209 23:23:44.104438   54859 command_runner.go:130] >   "images": [
	I1209 23:23:44.104444   54859 command_runner.go:130] >     {
	I1209 23:23:44.104452   54859 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1209 23:23:44.104456   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104461   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1209 23:23:44.104465   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104469   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104477   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1209 23:23:44.104483   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1209 23:23:44.104487   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104492   54859 command_runner.go:130] >       "size": "94965812",
	I1209 23:23:44.104496   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.104500   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.104504   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.104511   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.104515   54859 command_runner.go:130] >     },
	I1209 23:23:44.104519   54859 command_runner.go:130] >     {
	I1209 23:23:44.104524   54859 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1209 23:23:44.104528   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104534   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1209 23:23:44.104537   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104541   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104548   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1209 23:23:44.104557   54859 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1209 23:23:44.104561   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104565   54859 command_runner.go:130] >       "size": "94963761",
	I1209 23:23:44.104568   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.104574   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.104578   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.104582   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.104584   54859 command_runner.go:130] >     },
	I1209 23:23:44.104588   54859 command_runner.go:130] >     {
	I1209 23:23:44.104593   54859 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1209 23:23:44.104597   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104603   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1209 23:23:44.104606   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104613   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104620   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1209 23:23:44.104627   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1209 23:23:44.104633   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104636   54859 command_runner.go:130] >       "size": "1363676",
	I1209 23:23:44.104640   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.104645   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.104658   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.104665   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.104668   54859 command_runner.go:130] >     },
	I1209 23:23:44.104671   54859 command_runner.go:130] >     {
	I1209 23:23:44.104678   54859 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1209 23:23:44.104681   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104688   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 23:23:44.104692   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104696   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104705   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1209 23:23:44.104718   54859 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1209 23:23:44.104724   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104728   54859 command_runner.go:130] >       "size": "31470524",
	I1209 23:23:44.104732   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.104738   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.104743   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.104749   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.104752   54859 command_runner.go:130] >     },
	I1209 23:23:44.104758   54859 command_runner.go:130] >     {
	I1209 23:23:44.104763   54859 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1209 23:23:44.104770   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104775   54859 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1209 23:23:44.104781   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104785   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104795   54859 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1209 23:23:44.104804   54859 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1209 23:23:44.104810   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104814   54859 command_runner.go:130] >       "size": "63273227",
	I1209 23:23:44.104820   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.104824   54859 command_runner.go:130] >       "username": "nonroot",
	I1209 23:23:44.104828   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.104834   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.104837   54859 command_runner.go:130] >     },
	I1209 23:23:44.104842   54859 command_runner.go:130] >     {
	I1209 23:23:44.104848   54859 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1209 23:23:44.104854   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104859   54859 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1209 23:23:44.104865   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104868   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104877   54859 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1209 23:23:44.104886   54859 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1209 23:23:44.104892   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104897   54859 command_runner.go:130] >       "size": "149009664",
	I1209 23:23:44.104903   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.104907   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.104916   54859 command_runner.go:130] >       },
	I1209 23:23:44.104922   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.104926   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.104930   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.104934   54859 command_runner.go:130] >     },
	I1209 23:23:44.104937   54859 command_runner.go:130] >     {
	I1209 23:23:44.104945   54859 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1209 23:23:44.104952   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.104956   54859 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1209 23:23:44.104962   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104967   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.104976   54859 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1209 23:23:44.104986   54859 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1209 23:23:44.104991   54859 command_runner.go:130] >       ],
	I1209 23:23:44.104996   54859 command_runner.go:130] >       "size": "95274464",
	I1209 23:23:44.105002   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.105006   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.105012   54859 command_runner.go:130] >       },
	I1209 23:23:44.105015   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.105022   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.105026   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.105029   54859 command_runner.go:130] >     },
	I1209 23:23:44.105032   54859 command_runner.go:130] >     {
	I1209 23:23:44.105038   54859 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1209 23:23:44.105044   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.105049   54859 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1209 23:23:44.105055   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105059   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.105075   54859 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1209 23:23:44.105086   54859 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1209 23:23:44.105092   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105097   54859 command_runner.go:130] >       "size": "89474374",
	I1209 23:23:44.105104   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.105107   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.105113   54859 command_runner.go:130] >       },
	I1209 23:23:44.105118   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.105124   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.105127   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.105133   54859 command_runner.go:130] >     },
	I1209 23:23:44.105137   54859 command_runner.go:130] >     {
	I1209 23:23:44.105143   54859 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1209 23:23:44.105149   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.105153   54859 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1209 23:23:44.105159   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105163   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.105173   54859 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1209 23:23:44.105185   54859 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1209 23:23:44.105191   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105195   54859 command_runner.go:130] >       "size": "92783513",
	I1209 23:23:44.105201   54859 command_runner.go:130] >       "uid": null,
	I1209 23:23:44.105204   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.105209   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.105213   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.105218   54859 command_runner.go:130] >     },
	I1209 23:23:44.105221   54859 command_runner.go:130] >     {
	I1209 23:23:44.105229   54859 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1209 23:23:44.105235   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.105239   54859 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1209 23:23:44.105245   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105249   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.105258   54859 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1209 23:23:44.105267   54859 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1209 23:23:44.105273   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105277   54859 command_runner.go:130] >       "size": "68457798",
	I1209 23:23:44.105283   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.105287   54859 command_runner.go:130] >         "value": "0"
	I1209 23:23:44.105293   54859 command_runner.go:130] >       },
	I1209 23:23:44.105297   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.105303   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.105307   54859 command_runner.go:130] >       "pinned": false
	I1209 23:23:44.105312   54859 command_runner.go:130] >     },
	I1209 23:23:44.105316   54859 command_runner.go:130] >     {
	I1209 23:23:44.105328   54859 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1209 23:23:44.105334   54859 command_runner.go:130] >       "repoTags": [
	I1209 23:23:44.105339   54859 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1209 23:23:44.105344   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105349   54859 command_runner.go:130] >       "repoDigests": [
	I1209 23:23:44.105358   54859 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1209 23:23:44.105367   54859 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1209 23:23:44.105373   54859 command_runner.go:130] >       ],
	I1209 23:23:44.105377   54859 command_runner.go:130] >       "size": "742080",
	I1209 23:23:44.105383   54859 command_runner.go:130] >       "uid": {
	I1209 23:23:44.105387   54859 command_runner.go:130] >         "value": "65535"
	I1209 23:23:44.105391   54859 command_runner.go:130] >       },
	I1209 23:23:44.105395   54859 command_runner.go:130] >       "username": "",
	I1209 23:23:44.105400   54859 command_runner.go:130] >       "spec": null,
	I1209 23:23:44.105404   54859 command_runner.go:130] >       "pinned": true
	I1209 23:23:44.105410   54859 command_runner.go:130] >     }
	I1209 23:23:44.105415   54859 command_runner.go:130] >   ]
	I1209 23:23:44.105420   54859 command_runner.go:130] > }
	I1209 23:23:44.105966   54859 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:23:44.105981   54859 cache_images.go:84] Images are preloaded, skipping loading
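	(Editor's note) The JSON block above is the image listing that the command runner captured from the container runtime; minikube compares it against the preload manifest before deciding that loading can be skipped. A minimal, hypothetical Go sketch (not minikube's own code) of decoding that shape and checking for an expected tag:

	// Hypothetical sketch: decode an image listing in the same JSON shape as the
	// log above and check that an expected Kubernetes image is present.
	// Field names mirror the JSON keys in the log; this is not minikube's code.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// Abbreviated sample in the same shape as the logged output.
		raw := []byte(`{"images":[{"id":"9499c9960544e8","repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"repoDigests":[],"size":"95274464","pinned":false}]}`)

		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}

		want := "registry.k8s.io/kube-apiserver:v1.31.2"
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Printf("found %s (id %s, size %s bytes)\n", want, img.ID, img.Size)
					return
				}
			}
		}
		fmt.Printf("%s not preloaded\n", want)
	}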
	I1209 23:23:44.105988   54859 kubeadm.go:934] updating node { 192.168.39.48 8443 v1.31.2 crio true true} ...
	I1209 23:23:44.106075   54859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-555395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-555395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
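	(Editor's note) The [Service] drop-in logged above is rendered from a handful of node parameters (Kubernetes version, hostname override, node IP). As an illustrative sketch only, assuming a plain text/template approach rather than minikube's actual kubeadm.go implementation, the same drop-in could be produced like this:

	// Illustrative sketch (not minikube's actual code): render a kubelet systemd
	// drop-in like the one logged above from a few node parameters.
	package main

	import (
		"os"
		"text/template"
	)

	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		params := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.31.2", "multinode-555395", "192.168.39.48"}

		tmpl := template.Must(template.New("kubelet").Parse(dropIn))
		if err := tmpl.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}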
	I1209 23:23:44.106143   54859 ssh_runner.go:195] Run: crio config
	I1209 23:23:44.146374   54859 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1209 23:23:44.146398   54859 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1209 23:23:44.146404   54859 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1209 23:23:44.146407   54859 command_runner.go:130] > #
	I1209 23:23:44.146414   54859 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1209 23:23:44.146422   54859 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1209 23:23:44.146429   54859 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1209 23:23:44.146437   54859 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1209 23:23:44.146441   54859 command_runner.go:130] > # reload'.
	I1209 23:23:44.146447   54859 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1209 23:23:44.146453   54859 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1209 23:23:44.146459   54859 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1209 23:23:44.146465   54859 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1209 23:23:44.146469   54859 command_runner.go:130] > [crio]
	I1209 23:23:44.146475   54859 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1209 23:23:44.146482   54859 command_runner.go:130] > # containers images, in this directory.
	I1209 23:23:44.146626   54859 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1209 23:23:44.146656   54859 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1209 23:23:44.146665   54859 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1209 23:23:44.146678   54859 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1209 23:23:44.146687   54859 command_runner.go:130] > # imagestore = ""
	I1209 23:23:44.146699   54859 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1209 23:23:44.146714   54859 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1209 23:23:44.146724   54859 command_runner.go:130] > storage_driver = "overlay"
	I1209 23:23:44.146738   54859 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1209 23:23:44.146749   54859 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1209 23:23:44.146761   54859 command_runner.go:130] > storage_option = [
	I1209 23:23:44.146769   54859 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1209 23:23:44.146776   54859 command_runner.go:130] > ]
	I1209 23:23:44.146788   54859 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1209 23:23:44.146820   54859 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1209 23:23:44.146833   54859 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1209 23:23:44.146843   54859 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1209 23:23:44.146858   54859 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1209 23:23:44.146867   54859 command_runner.go:130] > # always happen on a node reboot
	I1209 23:23:44.146881   54859 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1209 23:23:44.146901   54859 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1209 23:23:44.146915   54859 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1209 23:23:44.146928   54859 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1209 23:23:44.146938   54859 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1209 23:23:44.146956   54859 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1209 23:23:44.146968   54859 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1209 23:23:44.146978   54859 command_runner.go:130] > # internal_wipe = true
	I1209 23:23:44.146990   54859 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1209 23:23:44.147004   54859 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1209 23:23:44.147015   54859 command_runner.go:130] > # internal_repair = false
	I1209 23:23:44.147026   54859 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1209 23:23:44.147040   54859 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1209 23:23:44.147051   54859 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1209 23:23:44.147058   54859 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1209 23:23:44.147065   54859 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1209 23:23:44.147070   54859 command_runner.go:130] > [crio.api]
	I1209 23:23:44.147076   54859 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1209 23:23:44.147080   54859 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1209 23:23:44.147087   54859 command_runner.go:130] > # IP address on which the stream server will listen.
	I1209 23:23:44.147092   54859 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1209 23:23:44.147099   54859 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1209 23:23:44.147106   54859 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1209 23:23:44.147111   54859 command_runner.go:130] > # stream_port = "0"
	I1209 23:23:44.147118   54859 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1209 23:23:44.147123   54859 command_runner.go:130] > # stream_enable_tls = false
	I1209 23:23:44.147131   54859 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1209 23:23:44.147136   54859 command_runner.go:130] > # stream_idle_timeout = ""
	I1209 23:23:44.147144   54859 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1209 23:23:44.147158   54859 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1209 23:23:44.147169   54859 command_runner.go:130] > # minutes.
	I1209 23:23:44.147177   54859 command_runner.go:130] > # stream_tls_cert = ""
	I1209 23:23:44.147194   54859 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1209 23:23:44.147209   54859 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1209 23:23:44.147220   54859 command_runner.go:130] > # stream_tls_key = ""
	I1209 23:23:44.147235   54859 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1209 23:23:44.147250   54859 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1209 23:23:44.147267   54859 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1209 23:23:44.147279   54859 command_runner.go:130] > # stream_tls_ca = ""
	I1209 23:23:44.147291   54859 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 23:23:44.147298   54859 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1209 23:23:44.147306   54859 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 23:23:44.147312   54859 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1209 23:23:44.147319   54859 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1209 23:23:44.147332   54859 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1209 23:23:44.147343   54859 command_runner.go:130] > [crio.runtime]
	I1209 23:23:44.147354   54859 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1209 23:23:44.147367   54859 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1209 23:23:44.147375   54859 command_runner.go:130] > # "nofile=1024:2048"
	I1209 23:23:44.147390   54859 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1209 23:23:44.147402   54859 command_runner.go:130] > # default_ulimits = [
	I1209 23:23:44.147410   54859 command_runner.go:130] > # ]
	I1209 23:23:44.147418   54859 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1209 23:23:44.147424   54859 command_runner.go:130] > # no_pivot = false
	I1209 23:23:44.147434   54859 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1209 23:23:44.147448   54859 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1209 23:23:44.147462   54859 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1209 23:23:44.147477   54859 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1209 23:23:44.147486   54859 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1209 23:23:44.147498   54859 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 23:23:44.147517   54859 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1209 23:23:44.147530   54859 command_runner.go:130] > # Cgroup setting for conmon
	I1209 23:23:44.147542   54859 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1209 23:23:44.147553   54859 command_runner.go:130] > conmon_cgroup = "pod"
	I1209 23:23:44.147581   54859 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1209 23:23:44.147595   54859 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1209 23:23:44.147610   54859 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 23:23:44.147619   54859 command_runner.go:130] > conmon_env = [
	I1209 23:23:44.147634   54859 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1209 23:23:44.147651   54859 command_runner.go:130] > ]
	I1209 23:23:44.147665   54859 command_runner.go:130] > # Additional environment variables to set for all the
	I1209 23:23:44.147677   54859 command_runner.go:130] > # containers. These are overridden if set in the
	I1209 23:23:44.147690   54859 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1209 23:23:44.147699   54859 command_runner.go:130] > # default_env = [
	I1209 23:23:44.147709   54859 command_runner.go:130] > # ]
	I1209 23:23:44.147721   54859 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1209 23:23:44.147737   54859 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1209 23:23:44.147747   54859 command_runner.go:130] > # selinux = false
	I1209 23:23:44.147759   54859 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1209 23:23:44.147772   54859 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1209 23:23:44.147783   54859 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1209 23:23:44.147794   54859 command_runner.go:130] > # seccomp_profile = ""
	I1209 23:23:44.147806   54859 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1209 23:23:44.147819   54859 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1209 23:23:44.147834   54859 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1209 23:23:44.147846   54859 command_runner.go:130] > # which might increase security.
	I1209 23:23:44.147858   54859 command_runner.go:130] > # This option is currently deprecated,
	I1209 23:23:44.147869   54859 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1209 23:23:44.147881   54859 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1209 23:23:44.147897   54859 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1209 23:23:44.147910   54859 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1209 23:23:44.147921   54859 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1209 23:23:44.147936   54859 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1209 23:23:44.147949   54859 command_runner.go:130] > # This option supports live configuration reload.
	I1209 23:23:44.147960   54859 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1209 23:23:44.147971   54859 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1209 23:23:44.147982   54859 command_runner.go:130] > # the cgroup blockio controller.
	I1209 23:23:44.147991   54859 command_runner.go:130] > # blockio_config_file = ""
	I1209 23:23:44.148006   54859 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1209 23:23:44.148016   54859 command_runner.go:130] > # blockio parameters.
	I1209 23:23:44.148025   54859 command_runner.go:130] > # blockio_reload = false
	I1209 23:23:44.148040   54859 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1209 23:23:44.148051   54859 command_runner.go:130] > # irqbalance daemon.
	I1209 23:23:44.148062   54859 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1209 23:23:44.148077   54859 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1209 23:23:44.148093   54859 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1209 23:23:44.148108   54859 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1209 23:23:44.148127   54859 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1209 23:23:44.148142   54859 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1209 23:23:44.148154   54859 command_runner.go:130] > # This option supports live configuration reload.
	I1209 23:23:44.148162   54859 command_runner.go:130] > # rdt_config_file = ""
	I1209 23:23:44.148178   54859 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1209 23:23:44.148187   54859 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1209 23:23:44.148211   54859 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1209 23:23:44.148224   54859 command_runner.go:130] > # separate_pull_cgroup = ""
	I1209 23:23:44.148238   54859 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1209 23:23:44.148248   54859 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1209 23:23:44.148259   54859 command_runner.go:130] > # will be added.
	I1209 23:23:44.148268   54859 command_runner.go:130] > # default_capabilities = [
	I1209 23:23:44.148277   54859 command_runner.go:130] > # 	"CHOWN",
	I1209 23:23:44.148283   54859 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1209 23:23:44.148287   54859 command_runner.go:130] > # 	"FSETID",
	I1209 23:23:44.148291   54859 command_runner.go:130] > # 	"FOWNER",
	I1209 23:23:44.148296   54859 command_runner.go:130] > # 	"SETGID",
	I1209 23:23:44.148302   54859 command_runner.go:130] > # 	"SETUID",
	I1209 23:23:44.148306   54859 command_runner.go:130] > # 	"SETPCAP",
	I1209 23:23:44.148310   54859 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1209 23:23:44.148315   54859 command_runner.go:130] > # 	"KILL",
	I1209 23:23:44.148318   54859 command_runner.go:130] > # ]
	I1209 23:23:44.148325   54859 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1209 23:23:44.148334   54859 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1209 23:23:44.148339   54859 command_runner.go:130] > # add_inheritable_capabilities = false
	I1209 23:23:44.148347   54859 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1209 23:23:44.148353   54859 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 23:23:44.148356   54859 command_runner.go:130] > default_sysctls = [
	I1209 23:23:44.148361   54859 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1209 23:23:44.148370   54859 command_runner.go:130] > ]
	I1209 23:23:44.148378   54859 command_runner.go:130] > # List of devices on the host that a
	I1209 23:23:44.148391   54859 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1209 23:23:44.148406   54859 command_runner.go:130] > # allowed_devices = [
	I1209 23:23:44.148423   54859 command_runner.go:130] > # 	"/dev/fuse",
	I1209 23:23:44.148434   54859 command_runner.go:130] > # ]
	I1209 23:23:44.148448   54859 command_runner.go:130] > # List of additional devices. specified as
	I1209 23:23:44.148464   54859 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1209 23:23:44.148477   54859 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1209 23:23:44.148491   54859 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 23:23:44.148503   54859 command_runner.go:130] > # additional_devices = [
	I1209 23:23:44.148518   54859 command_runner.go:130] > # ]
	I1209 23:23:44.148531   54859 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1209 23:23:44.148547   54859 command_runner.go:130] > # cdi_spec_dirs = [
	I1209 23:23:44.148558   54859 command_runner.go:130] > # 	"/etc/cdi",
	I1209 23:23:44.148566   54859 command_runner.go:130] > # 	"/var/run/cdi",
	I1209 23:23:44.148575   54859 command_runner.go:130] > # ]
	I1209 23:23:44.148585   54859 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1209 23:23:44.148598   54859 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1209 23:23:44.148609   54859 command_runner.go:130] > # Defaults to false.
	I1209 23:23:44.148620   54859 command_runner.go:130] > # device_ownership_from_security_context = false
	I1209 23:23:44.148636   54859 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1209 23:23:44.148650   54859 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1209 23:23:44.148658   54859 command_runner.go:130] > # hooks_dir = [
	I1209 23:23:44.148670   54859 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1209 23:23:44.148675   54859 command_runner.go:130] > # ]
	I1209 23:23:44.148685   54859 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1209 23:23:44.148699   54859 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1209 23:23:44.148713   54859 command_runner.go:130] > # its default mounts from the following two files:
	I1209 23:23:44.148724   54859 command_runner.go:130] > #
	I1209 23:23:44.148737   54859 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1209 23:23:44.148752   54859 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1209 23:23:44.148766   54859 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1209 23:23:44.148772   54859 command_runner.go:130] > #
	I1209 23:23:44.148789   54859 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1209 23:23:44.148802   54859 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1209 23:23:44.148817   54859 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1209 23:23:44.148831   54859 command_runner.go:130] > #      only add mounts it finds in this file.
	I1209 23:23:44.148842   54859 command_runner.go:130] > #
	I1209 23:23:44.148851   54859 command_runner.go:130] > # default_mounts_file = ""
	I1209 23:23:44.148864   54859 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1209 23:23:44.148876   54859 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1209 23:23:44.148886   54859 command_runner.go:130] > pids_limit = 1024
	I1209 23:23:44.148894   54859 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1209 23:23:44.148907   54859 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1209 23:23:44.148923   54859 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1209 23:23:44.148941   54859 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1209 23:23:44.148953   54859 command_runner.go:130] > # log_size_max = -1
	I1209 23:23:44.148964   54859 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1209 23:23:44.148976   54859 command_runner.go:130] > # log_to_journald = false
	I1209 23:23:44.148987   54859 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1209 23:23:44.148999   54859 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1209 23:23:44.149019   54859 command_runner.go:130] > # Path to directory for container attach sockets.
	I1209 23:23:44.149031   54859 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1209 23:23:44.149045   54859 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1209 23:23:44.149056   54859 command_runner.go:130] > # bind_mount_prefix = ""
	I1209 23:23:44.149070   54859 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1209 23:23:44.149081   54859 command_runner.go:130] > # read_only = false
	I1209 23:23:44.149093   54859 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1209 23:23:44.149107   54859 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1209 23:23:44.149119   54859 command_runner.go:130] > # live configuration reload.
	I1209 23:23:44.149127   54859 command_runner.go:130] > # log_level = "info"
	I1209 23:23:44.149138   54859 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1209 23:23:44.149150   54859 command_runner.go:130] > # This option supports live configuration reload.
	I1209 23:23:44.149160   54859 command_runner.go:130] > # log_filter = ""
	I1209 23:23:44.149174   54859 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1209 23:23:44.149189   54859 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1209 23:23:44.149202   54859 command_runner.go:130] > # separated by comma.
	I1209 23:23:44.149218   54859 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 23:23:44.149231   54859 command_runner.go:130] > # uid_mappings = ""
	I1209 23:23:44.149245   54859 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1209 23:23:44.149263   54859 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1209 23:23:44.149274   54859 command_runner.go:130] > # separated by comma.
	I1209 23:23:44.149317   54859 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 23:23:44.149342   54859 command_runner.go:130] > # gid_mappings = ""
	I1209 23:23:44.149350   54859 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1209 23:23:44.149366   54859 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 23:23:44.149377   54859 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 23:23:44.149389   54859 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 23:23:44.149399   54859 command_runner.go:130] > # minimum_mappable_uid = -1
	I1209 23:23:44.149407   54859 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1209 23:23:44.149423   54859 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 23:23:44.149435   54859 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 23:23:44.149447   54859 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 23:23:44.149458   54859 command_runner.go:130] > # minimum_mappable_gid = -1
	I1209 23:23:44.149468   54859 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1209 23:23:44.149481   54859 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1209 23:23:44.149493   54859 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1209 23:23:44.149512   54859 command_runner.go:130] > # ctr_stop_timeout = 30
	I1209 23:23:44.149524   54859 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1209 23:23:44.149533   54859 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1209 23:23:44.149545   54859 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1209 23:23:44.149552   54859 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1209 23:23:44.149563   54859 command_runner.go:130] > drop_infra_ctr = false
	I1209 23:23:44.149572   54859 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1209 23:23:44.149584   54859 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1209 23:23:44.149599   54859 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1209 23:23:44.149609   54859 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1209 23:23:44.149621   54859 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1209 23:23:44.149633   54859 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1209 23:23:44.149645   54859 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1209 23:23:44.149657   54859 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1209 23:23:44.149667   54859 command_runner.go:130] > # shared_cpuset = ""
	I1209 23:23:44.149677   54859 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1209 23:23:44.149693   54859 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1209 23:23:44.149703   54859 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1209 23:23:44.149714   54859 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1209 23:23:44.149725   54859 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1209 23:23:44.149734   54859 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1209 23:23:44.149747   54859 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1209 23:23:44.149757   54859 command_runner.go:130] > # enable_criu_support = false
	I1209 23:23:44.149766   54859 command_runner.go:130] > # Enable/disable the generation of the container,
	I1209 23:23:44.149778   54859 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1209 23:23:44.149788   54859 command_runner.go:130] > # enable_pod_events = false
	I1209 23:23:44.149801   54859 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1209 23:23:44.149815   54859 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1209 23:23:44.149826   54859 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1209 23:23:44.149833   54859 command_runner.go:130] > # default_runtime = "runc"
	I1209 23:23:44.149845   54859 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1209 23:23:44.149862   54859 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1209 23:23:44.149874   54859 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1209 23:23:44.149881   54859 command_runner.go:130] > # creation as a file is not desired either.
	I1209 23:23:44.149890   54859 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1209 23:23:44.149899   54859 command_runner.go:130] > # the hostname is being managed dynamically.
	I1209 23:23:44.149904   54859 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1209 23:23:44.149907   54859 command_runner.go:130] > # ]
	I1209 23:23:44.149913   54859 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1209 23:23:44.149921   54859 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1209 23:23:44.149927   54859 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1209 23:23:44.149932   54859 command_runner.go:130] > # Each entry in the table should follow the format:
	I1209 23:23:44.149935   54859 command_runner.go:130] > #
	I1209 23:23:44.149940   54859 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1209 23:23:44.149945   54859 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1209 23:23:44.149967   54859 command_runner.go:130] > # runtime_type = "oci"
	I1209 23:23:44.149977   54859 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1209 23:23:44.149985   54859 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1209 23:23:44.149995   54859 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1209 23:23:44.150003   54859 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1209 23:23:44.150013   54859 command_runner.go:130] > # monitor_env = []
	I1209 23:23:44.150021   54859 command_runner.go:130] > # privileged_without_host_devices = false
	I1209 23:23:44.150031   54859 command_runner.go:130] > # allowed_annotations = []
	I1209 23:23:44.150040   54859 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1209 23:23:44.150049   54859 command_runner.go:130] > # Where:
	I1209 23:23:44.150060   54859 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1209 23:23:44.150073   54859 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1209 23:23:44.150086   54859 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1209 23:23:44.150098   54859 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1209 23:23:44.150107   54859 command_runner.go:130] > #   in $PATH.
	I1209 23:23:44.150117   54859 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1209 23:23:44.150126   54859 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1209 23:23:44.150132   54859 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1209 23:23:44.150138   54859 command_runner.go:130] > #   state.
	I1209 23:23:44.150144   54859 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1209 23:23:44.150152   54859 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1209 23:23:44.150158   54859 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1209 23:23:44.150166   54859 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1209 23:23:44.150172   54859 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1209 23:23:44.150180   54859 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1209 23:23:44.150184   54859 command_runner.go:130] > #   The currently recognized values are:
	I1209 23:23:44.150193   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1209 23:23:44.150199   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1209 23:23:44.150209   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1209 23:23:44.150215   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1209 23:23:44.150224   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1209 23:23:44.150230   54859 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1209 23:23:44.150238   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1209 23:23:44.150245   54859 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1209 23:23:44.150253   54859 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1209 23:23:44.150258   54859 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1209 23:23:44.150263   54859 command_runner.go:130] > #   deprecated option "conmon".
	I1209 23:23:44.150270   54859 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1209 23:23:44.150277   54859 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1209 23:23:44.150283   54859 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1209 23:23:44.150294   54859 command_runner.go:130] > #   should be moved to the container's cgroup
	I1209 23:23:44.150300   54859 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1209 23:23:44.150308   54859 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1209 23:23:44.150313   54859 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1209 23:23:44.150318   54859 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1209 23:23:44.150323   54859 command_runner.go:130] > #
	I1209 23:23:44.150328   54859 command_runner.go:130] > # Using the seccomp notifier feature:
	I1209 23:23:44.150333   54859 command_runner.go:130] > #
	I1209 23:23:44.150339   54859 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1209 23:23:44.150347   54859 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1209 23:23:44.150351   54859 command_runner.go:130] > #
	I1209 23:23:44.150356   54859 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1209 23:23:44.150364   54859 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1209 23:23:44.150367   54859 command_runner.go:130] > #
	I1209 23:23:44.150373   54859 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1209 23:23:44.150379   54859 command_runner.go:130] > # feature.
	I1209 23:23:44.150382   54859 command_runner.go:130] > #
	I1209 23:23:44.150387   54859 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1209 23:23:44.150395   54859 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1209 23:23:44.150401   54859 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1209 23:23:44.150409   54859 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1209 23:23:44.150415   54859 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1209 23:23:44.150418   54859 command_runner.go:130] > #
	I1209 23:23:44.150424   54859 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1209 23:23:44.150433   54859 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1209 23:23:44.150439   54859 command_runner.go:130] > #
	I1209 23:23:44.150445   54859 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1209 23:23:44.150453   54859 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1209 23:23:44.150456   54859 command_runner.go:130] > #
	I1209 23:23:44.150461   54859 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1209 23:23:44.150469   54859 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1209 23:23:44.150473   54859 command_runner.go:130] > # limitation.
	I1209 23:23:44.150478   54859 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1209 23:23:44.150482   54859 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1209 23:23:44.150488   54859 command_runner.go:130] > runtime_type = "oci"
	I1209 23:23:44.150492   54859 command_runner.go:130] > runtime_root = "/run/runc"
	I1209 23:23:44.150496   54859 command_runner.go:130] > runtime_config_path = ""
	I1209 23:23:44.150502   54859 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 23:23:44.150508   54859 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 23:23:44.150514   54859 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 23:23:44.150518   54859 command_runner.go:130] > monitor_env = [
	I1209 23:23:44.150525   54859 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1209 23:23:44.150528   54859 command_runner.go:130] > ]
	I1209 23:23:44.150535   54859 command_runner.go:130] > privileged_without_host_devices = false
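	(Editor's note) The `crio config` dump continues below; the settings already shown (storage_driver, cgroup_manager, the [crio.runtime.runtimes.runc] table) are the ones the bootstrap path typically cares about. A hedged sketch of reading them back from the config file, assuming the github.com/BurntSushi/toml module and the conventional /etc/crio/crio.conf path (neither assumption comes from this report):

	// Hedged sketch: decode a few of the crio config fields shown in the dump
	// above. Assumes the BurntSushi/toml module is available and that the merged
	// config lives at /etc/crio/crio.conf; both are assumptions, not facts from
	// this test run.
	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	type crioConfig struct {
		Crio struct {
			StorageDriver string `toml:"storage_driver"`
			Runtime       struct {
				CgroupManager string `toml:"cgroup_manager"`
				Runtimes      map[string]struct {
					RuntimePath string `toml:"runtime_path"`
					MonitorPath string `toml:"monitor_path"`
				} `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			panic(err)
		}
		fmt.Println("storage_driver:", cfg.Crio.StorageDriver)
		fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager)
		if rc, ok := cfg.Crio.Runtime.Runtimes["runc"]; ok {
			fmt.Println("runc path:", rc.RuntimePath, "monitor:", rc.MonitorPath)
		}
	}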
	I1209 23:23:44.150544   54859 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1209 23:23:44.150552   54859 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1209 23:23:44.150558   54859 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1209 23:23:44.150567   54859 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1209 23:23:44.150574   54859 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1209 23:23:44.150581   54859 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1209 23:23:44.150590   54859 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1209 23:23:44.150599   54859 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1209 23:23:44.150604   54859 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1209 23:23:44.150613   54859 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1209 23:23:44.150619   54859 command_runner.go:130] > # Example:
	I1209 23:23:44.150623   54859 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1209 23:23:44.150628   54859 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1209 23:23:44.150632   54859 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1209 23:23:44.150637   54859 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1209 23:23:44.150641   54859 command_runner.go:130] > # cpuset = 0
	I1209 23:23:44.150645   54859 command_runner.go:130] > # cpushares = "0-1"
	I1209 23:23:44.150648   54859 command_runner.go:130] > # Where:
	I1209 23:23:44.150654   54859 command_runner.go:130] > # The workload name is workload-type.
	I1209 23:23:44.150660   54859 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1209 23:23:44.150665   54859 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1209 23:23:44.150670   54859 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1209 23:23:44.150676   54859 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1209 23:23:44.150681   54859 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1209 23:23:44.150686   54859 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1209 23:23:44.150691   54859 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1209 23:23:44.150695   54859 command_runner.go:130] > # Default value is set to true
	I1209 23:23:44.150699   54859 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1209 23:23:44.150704   54859 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1209 23:23:44.150708   54859 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1209 23:23:44.150712   54859 command_runner.go:130] > # Default value is set to 'false'
	I1209 23:23:44.150715   54859 command_runner.go:130] > # disable_hostport_mapping = false
	I1209 23:23:44.150721   54859 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1209 23:23:44.150724   54859 command_runner.go:130] > #
	I1209 23:23:44.150729   54859 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1209 23:23:44.150735   54859 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1209 23:23:44.150741   54859 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1209 23:23:44.150749   54859 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1209 23:23:44.150754   54859 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1209 23:23:44.150760   54859 command_runner.go:130] > [crio.image]
	I1209 23:23:44.150766   54859 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1209 23:23:44.150772   54859 command_runner.go:130] > # default_transport = "docker://"
	I1209 23:23:44.150778   54859 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1209 23:23:44.150786   54859 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1209 23:23:44.150790   54859 command_runner.go:130] > # global_auth_file = ""
	I1209 23:23:44.150795   54859 command_runner.go:130] > # The image used to instantiate infra containers.
	I1209 23:23:44.150803   54859 command_runner.go:130] > # This option supports live configuration reload.
	I1209 23:23:44.150808   54859 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1209 23:23:44.150817   54859 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1209 23:23:44.150823   54859 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1209 23:23:44.150830   54859 command_runner.go:130] > # This option supports live configuration reload.
	I1209 23:23:44.150834   54859 command_runner.go:130] > # pause_image_auth_file = ""
	I1209 23:23:44.150842   54859 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1209 23:23:44.150847   54859 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1209 23:23:44.150857   54859 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1209 23:23:44.150865   54859 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1209 23:23:44.150871   54859 command_runner.go:130] > # pause_command = "/pause"
	I1209 23:23:44.150877   54859 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1209 23:23:44.150885   54859 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1209 23:23:44.150890   54859 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1209 23:23:44.150897   54859 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1209 23:23:44.150903   54859 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1209 23:23:44.150911   54859 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1209 23:23:44.150917   54859 command_runner.go:130] > # pinned_images = [
	I1209 23:23:44.150920   54859 command_runner.go:130] > # ]
	I1209 23:23:44.150926   54859 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1209 23:23:44.150934   54859 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1209 23:23:44.150944   54859 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1209 23:23:44.150951   54859 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1209 23:23:44.150959   54859 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1209 23:23:44.150963   54859 command_runner.go:130] > # signature_policy = ""
	I1209 23:23:44.150970   54859 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1209 23:23:44.150976   54859 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1209 23:23:44.150985   54859 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1209 23:23:44.150991   54859 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1209 23:23:44.150998   54859 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1209 23:23:44.151003   54859 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1209 23:23:44.151011   54859 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1209 23:23:44.151017   54859 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1209 23:23:44.151023   54859 command_runner.go:130] > # changing them here.
	I1209 23:23:44.151027   54859 command_runner.go:130] > # insecure_registries = [
	I1209 23:23:44.151033   54859 command_runner.go:130] > # ]
	I1209 23:23:44.151039   54859 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1209 23:23:44.151046   54859 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1209 23:23:44.151050   54859 command_runner.go:130] > # image_volumes = "mkdir"
	I1209 23:23:44.151055   54859 command_runner.go:130] > # Temporary directory to use for storing big files
	I1209 23:23:44.151062   54859 command_runner.go:130] > # big_files_temporary_dir = ""
	I1209 23:23:44.151068   54859 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1209 23:23:44.151074   54859 command_runner.go:130] > # CNI plugins.
	I1209 23:23:44.151078   54859 command_runner.go:130] > [crio.network]
	I1209 23:23:44.151083   54859 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1209 23:23:44.151092   54859 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1209 23:23:44.151098   54859 command_runner.go:130] > # cni_default_network = ""
	I1209 23:23:44.151104   54859 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1209 23:23:44.151110   54859 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1209 23:23:44.151115   54859 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1209 23:23:44.151121   54859 command_runner.go:130] > # plugin_dirs = [
	I1209 23:23:44.151125   54859 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1209 23:23:44.151131   54859 command_runner.go:130] > # ]
	I1209 23:23:44.151137   54859 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1209 23:23:44.151143   54859 command_runner.go:130] > [crio.metrics]
	I1209 23:23:44.151148   54859 command_runner.go:130] > # Globally enable or disable metrics support.
	I1209 23:23:44.151154   54859 command_runner.go:130] > enable_metrics = true
	I1209 23:23:44.151159   54859 command_runner.go:130] > # Specify enabled metrics collectors.
	I1209 23:23:44.151165   54859 command_runner.go:130] > # Per default all metrics are enabled.
	I1209 23:23:44.151171   54859 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1209 23:23:44.151180   54859 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1209 23:23:44.151188   54859 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1209 23:23:44.151194   54859 command_runner.go:130] > # metrics_collectors = [
	I1209 23:23:44.151200   54859 command_runner.go:130] > # 	"operations",
	I1209 23:23:44.151204   54859 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1209 23:23:44.151211   54859 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1209 23:23:44.151215   54859 command_runner.go:130] > # 	"operations_errors",
	I1209 23:23:44.151221   54859 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1209 23:23:44.151226   54859 command_runner.go:130] > # 	"image_pulls_by_name",
	I1209 23:23:44.151233   54859 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1209 23:23:44.151237   54859 command_runner.go:130] > # 	"image_pulls_failures",
	I1209 23:23:44.151242   54859 command_runner.go:130] > # 	"image_pulls_successes",
	I1209 23:23:44.151246   54859 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1209 23:23:44.151250   54859 command_runner.go:130] > # 	"image_layer_reuse",
	I1209 23:23:44.151254   54859 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1209 23:23:44.151260   54859 command_runner.go:130] > # 	"containers_oom_total",
	I1209 23:23:44.151264   54859 command_runner.go:130] > # 	"containers_oom",
	I1209 23:23:44.151268   54859 command_runner.go:130] > # 	"processes_defunct",
	I1209 23:23:44.151274   54859 command_runner.go:130] > # 	"operations_total",
	I1209 23:23:44.151277   54859 command_runner.go:130] > # 	"operations_latency_seconds",
	I1209 23:23:44.151282   54859 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1209 23:23:44.151299   54859 command_runner.go:130] > # 	"operations_errors_total",
	I1209 23:23:44.151306   54859 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1209 23:23:44.151310   54859 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1209 23:23:44.151316   54859 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1209 23:23:44.151320   54859 command_runner.go:130] > # 	"image_pulls_success_total",
	I1209 23:23:44.151326   54859 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1209 23:23:44.151330   54859 command_runner.go:130] > # 	"containers_oom_count_total",
	I1209 23:23:44.151337   54859 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1209 23:23:44.151341   54859 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1209 23:23:44.151349   54859 command_runner.go:130] > # ]
	I1209 23:23:44.151356   54859 command_runner.go:130] > # The port on which the metrics server will listen.
	I1209 23:23:44.151360   54859 command_runner.go:130] > # metrics_port = 9090
	I1209 23:23:44.151367   54859 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1209 23:23:44.151371   54859 command_runner.go:130] > # metrics_socket = ""
	I1209 23:23:44.151375   54859 command_runner.go:130] > # The certificate for the secure metrics server.
	I1209 23:23:44.151383   54859 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1209 23:23:44.151389   54859 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1209 23:23:44.151396   54859 command_runner.go:130] > # certificate on any modification event.
	I1209 23:23:44.151401   54859 command_runner.go:130] > # metrics_cert = ""
	I1209 23:23:44.151408   54859 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1209 23:23:44.151416   54859 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1209 23:23:44.151420   54859 command_runner.go:130] > # metrics_key = ""
	I1209 23:23:44.151427   54859 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1209 23:23:44.151431   54859 command_runner.go:130] > [crio.tracing]
	I1209 23:23:44.151436   54859 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1209 23:23:44.151442   54859 command_runner.go:130] > # enable_tracing = false
	I1209 23:23:44.151448   54859 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1209 23:23:44.151454   54859 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1209 23:23:44.151460   54859 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1209 23:23:44.151467   54859 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1209 23:23:44.151471   54859 command_runner.go:130] > # CRI-O NRI configuration.
	I1209 23:23:44.151479   54859 command_runner.go:130] > [crio.nri]
	I1209 23:23:44.151484   54859 command_runner.go:130] > # Globally enable or disable NRI.
	I1209 23:23:44.151490   54859 command_runner.go:130] > # enable_nri = false
	I1209 23:23:44.151494   54859 command_runner.go:130] > # NRI socket to listen on.
	I1209 23:23:44.151501   54859 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1209 23:23:44.151505   54859 command_runner.go:130] > # NRI plugin directory to use.
	I1209 23:23:44.151512   54859 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1209 23:23:44.151517   54859 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1209 23:23:44.151524   54859 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1209 23:23:44.151529   54859 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1209 23:23:44.151536   54859 command_runner.go:130] > # nri_disable_connections = false
	I1209 23:23:44.151542   54859 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1209 23:23:44.151549   54859 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1209 23:23:44.151554   54859 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1209 23:23:44.151573   54859 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1209 23:23:44.151584   54859 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1209 23:23:44.151593   54859 command_runner.go:130] > [crio.stats]
	I1209 23:23:44.151599   54859 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1209 23:23:44.151606   54859 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1209 23:23:44.151611   54859 command_runner.go:130] > # stats_collection_period = 0
	I1209 23:23:44.151977   54859 command_runner.go:130] ! time="2024-12-09 23:23:44.118567274Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1209 23:23:44.152006   54859 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1209 23:23:44.152101   54859 cni.go:84] Creating CNI manager for ""
	I1209 23:23:44.152112   54859 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1209 23:23:44.152120   54859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:23:44.152140   54859 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.48 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-555395 NodeName:multinode-555395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:23:44.152246   54859 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-555395"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.48"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.48"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:23:44.152322   54859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:23:44.162016   54859 command_runner.go:130] > kubeadm
	I1209 23:23:44.162124   54859 command_runner.go:130] > kubectl
	I1209 23:23:44.162136   54859 command_runner.go:130] > kubelet
	I1209 23:23:44.162160   54859 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:23:44.162209   54859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:23:44.170972   54859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1209 23:23:44.187310   54859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:23:44.202741   54859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1209 23:23:44.218882   54859 ssh_runner.go:195] Run: grep 192.168.39.48	control-plane.minikube.internal$ /etc/hosts
	I1209 23:23:44.222602   54859 command_runner.go:130] > 192.168.39.48	control-plane.minikube.internal
	I1209 23:23:44.222662   54859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:23:44.364188   54859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:23:44.378381   54859 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395 for IP: 192.168.39.48
	I1209 23:23:44.378400   54859 certs.go:194] generating shared ca certs ...
	I1209 23:23:44.378417   54859 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:23:44.378576   54859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:23:44.378618   54859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:23:44.378627   54859 certs.go:256] generating profile certs ...
	I1209 23:23:44.378732   54859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/client.key
	I1209 23:23:44.378790   54859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/apiserver.key.44de7fae
	I1209 23:23:44.378820   54859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/proxy-client.key
	I1209 23:23:44.378828   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 23:23:44.378841   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 23:23:44.378853   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 23:23:44.378865   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 23:23:44.378879   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 23:23:44.378891   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 23:23:44.378904   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 23:23:44.378919   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 23:23:44.378964   54859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:23:44.378989   54859 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:23:44.378999   54859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:23:44.379027   54859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:23:44.379050   54859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:23:44.379070   54859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:23:44.379108   54859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:23:44.379133   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:23:44.379146   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem -> /usr/share/ca-certificates/26253.pem
	I1209 23:23:44.379158   54859 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> /usr/share/ca-certificates/262532.pem
	I1209 23:23:44.379767   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:23:44.402672   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:23:44.425249   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:23:44.448241   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:23:44.471457   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 23:23:44.494441   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 23:23:44.517282   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:23:44.540207   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/multinode-555395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 23:23:44.564410   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:23:44.587614   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:23:44.610807   54859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:23:44.633934   54859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:23:44.650348   54859 ssh_runner.go:195] Run: openssl version
	I1209 23:23:44.655688   54859 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1209 23:23:44.655764   54859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:23:44.666103   54859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:23:44.670224   54859 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:23:44.670253   54859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:23:44.670291   54859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:23:44.675578   54859 command_runner.go:130] > b5213941
	I1209 23:23:44.675650   54859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:23:44.684973   54859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:23:44.695517   54859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:23:44.700007   54859 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:23:44.700047   54859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:23:44.700098   54859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:23:44.705369   54859 command_runner.go:130] > 51391683
	I1209 23:23:44.705653   54859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:23:44.714771   54859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:23:44.725054   54859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:23:44.729360   54859 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:23:44.729446   54859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:23:44.729506   54859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:23:44.734993   54859 command_runner.go:130] > 3ec20f2e
	I1209 23:23:44.735068   54859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:23:44.744259   54859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:23:44.748576   54859 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:23:44.748603   54859 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 23:23:44.748609   54859 command_runner.go:130] > Device: 253,1	Inode: 2103342     Links: 1
	I1209 23:23:44.748615   54859 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 23:23:44.748621   54859 command_runner.go:130] > Access: 2024-12-09 23:17:04.148672813 +0000
	I1209 23:23:44.748625   54859 command_runner.go:130] > Modify: 2024-12-09 23:17:04.148672813 +0000
	I1209 23:23:44.748630   54859 command_runner.go:130] > Change: 2024-12-09 23:17:04.148672813 +0000
	I1209 23:23:44.748634   54859 command_runner.go:130] >  Birth: 2024-12-09 23:17:04.148672813 +0000
	I1209 23:23:44.748747   54859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:23:44.754153   54859 command_runner.go:130] > Certificate will not expire
	I1209 23:23:44.754215   54859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:23:44.759590   54859 command_runner.go:130] > Certificate will not expire
	I1209 23:23:44.759655   54859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:23:44.764924   54859 command_runner.go:130] > Certificate will not expire
	I1209 23:23:44.765044   54859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:23:44.770622   54859 command_runner.go:130] > Certificate will not expire
	I1209 23:23:44.770685   54859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:23:44.776302   54859 command_runner.go:130] > Certificate will not expire
	I1209 23:23:44.776357   54859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 23:23:44.781987   54859 command_runner.go:130] > Certificate will not expire
	I1209 23:23:44.782185   54859 kubeadm.go:392] StartCluster: {Name:multinode-555395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-555395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.99 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:23:44.782293   54859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:23:44.782332   54859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:23:44.816639   54859 command_runner.go:130] > c99c37247d46e67e7e3c175977b1a7a5ba72c2f9e8be6c93c316073beb8b032e
	I1209 23:23:44.816668   54859 command_runner.go:130] > 7bac2bd12d4ffacf75cbd5a00d9280ccfc1a1c1807e3f86a0c2aa7400908cf64
	I1209 23:23:44.816674   54859 command_runner.go:130] > a94a2e0e32a100747e56aef38dc31c3dfdbc3c578cc30581623ccec551c3fcca
	I1209 23:23:44.816681   54859 command_runner.go:130] > fed5163426ab16376195c7072f36efa6c0bce7f8a8175a2e77f24480eca551d0
	I1209 23:23:44.816686   54859 command_runner.go:130] > c6c47bc106b079e6cce87013ca67f3a03a7bc882f443116ec7bee46ffd42fa85
	I1209 23:23:44.816691   54859 command_runner.go:130] > f8c195b1f78b47fcd63aa8d06fa74e569778d428e61743248aa7bd367a262022
	I1209 23:23:44.816697   54859 command_runner.go:130] > 0665312a47f2878d2be0a2909cd95c8fd738cc5f6cebf3d426c3a78611cfccea
	I1209 23:23:44.816703   54859 command_runner.go:130] > cd931d3954579a9b381c2e73ddd63c6c2fafa6f777f2d704d1b38b37ef58b6f8
	I1209 23:23:44.817994   54859 cri.go:89] found id: "c99c37247d46e67e7e3c175977b1a7a5ba72c2f9e8be6c93c316073beb8b032e"
	I1209 23:23:44.818009   54859 cri.go:89] found id: "7bac2bd12d4ffacf75cbd5a00d9280ccfc1a1c1807e3f86a0c2aa7400908cf64"
	I1209 23:23:44.818012   54859 cri.go:89] found id: "a94a2e0e32a100747e56aef38dc31c3dfdbc3c578cc30581623ccec551c3fcca"
	I1209 23:23:44.818015   54859 cri.go:89] found id: "fed5163426ab16376195c7072f36efa6c0bce7f8a8175a2e77f24480eca551d0"
	I1209 23:23:44.818017   54859 cri.go:89] found id: "c6c47bc106b079e6cce87013ca67f3a03a7bc882f443116ec7bee46ffd42fa85"
	I1209 23:23:44.818021   54859 cri.go:89] found id: "f8c195b1f78b47fcd63aa8d06fa74e569778d428e61743248aa7bd367a262022"
	I1209 23:23:44.818023   54859 cri.go:89] found id: "0665312a47f2878d2be0a2909cd95c8fd738cc5f6cebf3d426c3a78611cfccea"
	I1209 23:23:44.818026   54859 cri.go:89] found id: "cd931d3954579a9b381c2e73ddd63c6c2fafa6f777f2d704d1b38b37ef58b6f8"
	I1209 23:23:44.818028   54859 cri.go:89] found id: ""
	I1209 23:23:44.818067   54859 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
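
In the log above, minikube validates each reused control-plane certificate with "openssl x509 -noout -in <cert> -checkend 86400" and logs "Certificate will not expire" when the check passes. The following is a minimal Go sketch of that 24-hour expiry check using only crypto/x509; the certificate path is illustrative and not taken from this report.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM-encoded certificate at path expires
// within the given window, roughly mirroring `openssl x509 -checkend <seconds>`.
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring if "now + window" falls after the certificate's NotAfter time.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path; minikube checks several certs under /var/lib/minikube/certs.
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if expiring {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
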
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-555395 -n multinode-555395
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-555395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.26s)
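
The CRI-O configuration dumped earlier in this test's log enables metrics (enable_metrics = true) and leaves the commented default metrics_port = 9090. As a rough sketch, assuming the Prometheus-style endpoint is reachable at /metrics on that port from the node, the Go program below fetches the page and prints a handful of crio-prefixed metric lines; the address and the exact metric names that appear are assumptions, not taken from this report.

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"os"
	"strings"
	"time"
)

func main() {
	// Assumed endpoint: CRI-O's metrics server on the default port from the config dump.
	url := "http://127.0.0.1:9090/metrics"
	client := &http.Client{Timeout: 5 * time.Second}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "metrics scrape failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// Print only CRI-O metric lines (the config notes collectors may be prefixed
	// with "crio_" or "container_runtime_"), and stop after a short sample.
	scanner := bufio.NewScanner(resp.Body)
	printed := 0
	for scanner.Scan() && printed < 20 {
		line := scanner.Text()
		if strings.HasPrefix(line, "crio_") || strings.HasPrefix(line, "container_runtime_crio_") {
			fmt.Println(line)
			printed++
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
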

                                                
                                    
x
+
TestPreload (267.37s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-199590 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1209 23:33:12.522330   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-199590 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m4.941903414s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-199590 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-199590 image pull gcr.io/k8s-minikube/busybox: (2.511234399s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-199590
E1209 23:35:09.406882   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:35:26.333441   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-199590: exit status 82 (2m0.461471789s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-199590"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-199590 failed: exit status 82
panic.go:629: *** TestPreload FAILED at 2024-12-09 23:36:18.62719045 +0000 UTC m=+3863.058805555
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-199590 -n test-preload-199590
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-199590 -n test-preload-199590: exit status 3 (18.552744369s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:36:37.175863   59747 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.193:22: connect: no route to host
	E1209 23:36:37.175884   59747 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.193:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-199590" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-199590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-199590
--- FAIL: TestPreload (267.37s)
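
The post-mortem status check above fails with "dial tcp 192.168.39.193:22: connect: no route to host", i.e. minikube cannot even open a TCP connection to the VM's SSH port after the stop times out. A minimal sketch of that kind of TCP reachability probe is shown below; the address and timeout are illustrative, and this is not minikube's actual status implementation.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative address; the report shows 192.168.39.193:22 being unreachable.
	addr := "192.168.39.193:22"

	// Try to open a plain TCP connection to the SSH port with a short timeout.
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "ssh port unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("tcp connection to", addr, "succeeded")
}
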

                                                
                                    
x
+
TestKubernetesUpgrade (525.65s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-996806 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-996806 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m52.84880701s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-996806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-996806" primary control-plane node in "kubernetes-upgrade-996806" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:38:32.378194   60836 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:38:32.378317   60836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:38:32.378329   60836 out.go:358] Setting ErrFile to fd 2...
	I1209 23:38:32.378337   60836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:38:32.378634   60836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:38:32.379460   60836 out.go:352] Setting JSON to false
	I1209 23:38:32.380727   60836 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8463,"bootTime":1733779049,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:38:32.380818   60836 start.go:139] virtualization: kvm guest
	I1209 23:38:32.382312   60836 out.go:177] * [kubernetes-upgrade-996806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:38:32.383984   60836 notify.go:220] Checking for updates...
	I1209 23:38:32.383994   60836 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:38:32.385717   60836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:38:32.387400   60836 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:38:32.388700   60836 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:38:32.389868   60836 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:38:32.390911   60836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:38:32.392309   60836 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:38:32.426737   60836 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 23:38:32.428123   60836 start.go:297] selected driver: kvm2
	I1209 23:38:32.428141   60836 start.go:901] validating driver "kvm2" against <nil>
	I1209 23:38:32.428157   60836 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:38:32.429292   60836 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:38:32.443623   60836 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:38:32.463767   60836 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:38:32.463827   60836 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:38:32.464125   60836 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 23:38:32.464152   60836 cni.go:84] Creating CNI manager for ""
	I1209 23:38:32.464201   60836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:38:32.464215   60836 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 23:38:32.464299   60836 start.go:340] cluster config:
	{Name:kubernetes-upgrade-996806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-996806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:38:32.464399   60836 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:38:32.467191   60836 out.go:177] * Starting "kubernetes-upgrade-996806" primary control-plane node in "kubernetes-upgrade-996806" cluster
	I1209 23:38:32.468519   60836 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:38:32.468578   60836 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 23:38:32.468605   60836 cache.go:56] Caching tarball of preloaded images
	I1209 23:38:32.468738   60836 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:38:32.468755   60836 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1209 23:38:32.469135   60836 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/config.json ...
	I1209 23:38:32.469169   60836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/config.json: {Name:mk46aaf478d8f2e725ec5e68de70d8030aa35b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:38:32.469407   60836 start.go:360] acquireMachinesLock for kubernetes-upgrade-996806: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:38:58.579971   60836 start.go:364] duration metric: took 26.110518383s to acquireMachinesLock for "kubernetes-upgrade-996806"
	I1209 23:38:58.580051   60836 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-996806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-996806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:38:58.580157   60836 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 23:38:58.582409   60836 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 23:38:58.582567   60836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:38:58.582606   60836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:38:58.599616   60836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I1209 23:38:58.599986   60836 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:38:58.600470   60836 main.go:141] libmachine: Using API Version  1
	I1209 23:38:58.600494   60836 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:38:58.600849   60836 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:38:58.601034   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetMachineName
	I1209 23:38:58.601203   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .DriverName
	I1209 23:38:58.601367   60836 start.go:159] libmachine.API.Create for "kubernetes-upgrade-996806" (driver="kvm2")
	I1209 23:38:58.601392   60836 client.go:168] LocalClient.Create starting
	I1209 23:38:58.601428   60836 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 23:38:58.601471   60836 main.go:141] libmachine: Decoding PEM data...
	I1209 23:38:58.601491   60836 main.go:141] libmachine: Parsing certificate...
	I1209 23:38:58.601580   60836 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 23:38:58.601610   60836 main.go:141] libmachine: Decoding PEM data...
	I1209 23:38:58.601632   60836 main.go:141] libmachine: Parsing certificate...
	I1209 23:38:58.601657   60836 main.go:141] libmachine: Running pre-create checks...
	I1209 23:38:58.601670   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .PreCreateCheck
	I1209 23:38:58.602066   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetConfigRaw
	I1209 23:38:58.602514   60836 main.go:141] libmachine: Creating machine...
	I1209 23:38:58.602530   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .Create
	I1209 23:38:58.602674   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Creating KVM machine...
	I1209 23:38:58.603785   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found existing default KVM network
	I1209 23:38:58.604773   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:38:58.604585   63354 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b6:3a:f4} reservation:<nil>}
	I1209 23:38:58.605455   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:38:58.605395   63354 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000204e70}
	I1209 23:38:58.605507   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | created network xml: 
	I1209 23:38:58.605531   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | <network>
	I1209 23:38:58.605560   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG |   <name>mk-kubernetes-upgrade-996806</name>
	I1209 23:38:58.605575   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG |   <dns enable='no'/>
	I1209 23:38:58.605584   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG |   
	I1209 23:38:58.605592   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1209 23:38:58.605608   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG |     <dhcp>
	I1209 23:38:58.605619   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1209 23:38:58.605632   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG |     </dhcp>
	I1209 23:38:58.605641   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG |   </ip>
	I1209 23:38:58.605648   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG |   
	I1209 23:38:58.605658   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | </network>
	I1209 23:38:58.605667   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | 
	I1209 23:38:58.610504   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | trying to create private KVM network mk-kubernetes-upgrade-996806 192.168.50.0/24...
	I1209 23:38:58.678890   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806 ...
	I1209 23:38:58.678922   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | private KVM network mk-kubernetes-upgrade-996806 192.168.50.0/24 created
	I1209 23:38:58.678938   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 23:38:58.678996   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:38:58.678840   63354 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:38:58.679062   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 23:38:58.929443   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:38:58.929326   63354 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806/id_rsa...
	I1209 23:38:59.070098   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:38:59.069966   63354 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806/kubernetes-upgrade-996806.rawdisk...
	I1209 23:38:59.070124   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Writing magic tar header
	I1209 23:38:59.070138   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Writing SSH key tar header
	I1209 23:38:59.070145   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:38:59.070084   63354 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806 ...
	I1209 23:38:59.070219   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806
	I1209 23:38:59.070243   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806 (perms=drwx------)
	I1209 23:38:59.070256   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 23:38:59.070280   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:38:59.070296   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 23:38:59.070312   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 23:38:59.070340   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 23:38:59.070353   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Checking permissions on dir: /home/jenkins
	I1209 23:38:59.070370   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Checking permissions on dir: /home
	I1209 23:38:59.070383   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Skipping /home - not owner
	I1209 23:38:59.070403   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 23:38:59.070422   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 23:38:59.070437   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 23:38:59.070446   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 23:38:59.070461   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Creating domain...
	I1209 23:38:59.071476   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) define libvirt domain using xml: 
	I1209 23:38:59.071508   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) <domain type='kvm'>
	I1209 23:38:59.071519   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)   <name>kubernetes-upgrade-996806</name>
	I1209 23:38:59.071536   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)   <memory unit='MiB'>2200</memory>
	I1209 23:38:59.071548   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)   <vcpu>2</vcpu>
	I1209 23:38:59.071556   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)   <features>
	I1209 23:38:59.071592   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <acpi/>
	I1209 23:38:59.071600   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <apic/>
	I1209 23:38:59.071636   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <pae/>
	I1209 23:38:59.071656   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     
	I1209 23:38:59.071668   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)   </features>
	I1209 23:38:59.071684   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)   <cpu mode='host-passthrough'>
	I1209 23:38:59.071696   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)   
	I1209 23:38:59.071703   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)   </cpu>
	I1209 23:38:59.071715   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)   <os>
	I1209 23:38:59.071725   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <type>hvm</type>
	I1209 23:38:59.071734   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <boot dev='cdrom'/>
	I1209 23:38:59.071742   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <boot dev='hd'/>
	I1209 23:38:59.071752   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <bootmenu enable='no'/>
	I1209 23:38:59.071766   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)   </os>
	I1209 23:38:59.071779   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)   <devices>
	I1209 23:38:59.071794   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <disk type='file' device='cdrom'>
	I1209 23:38:59.071811   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806/boot2docker.iso'/>
	I1209 23:38:59.071822   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)       <target dev='hdc' bus='scsi'/>
	I1209 23:38:59.071828   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)       <readonly/>
	I1209 23:38:59.071854   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     </disk>
	I1209 23:38:59.071868   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <disk type='file' device='disk'>
	I1209 23:38:59.071880   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 23:38:59.071897   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806/kubernetes-upgrade-996806.rawdisk'/>
	I1209 23:38:59.071908   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)       <target dev='hda' bus='virtio'/>
	I1209 23:38:59.071915   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     </disk>
	I1209 23:38:59.071929   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <interface type='network'>
	I1209 23:38:59.071943   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)       <source network='mk-kubernetes-upgrade-996806'/>
	I1209 23:38:59.071955   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)       <model type='virtio'/>
	I1209 23:38:59.071964   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     </interface>
	I1209 23:38:59.071975   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <interface type='network'>
	I1209 23:38:59.071987   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)       <source network='default'/>
	I1209 23:38:59.072000   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)       <model type='virtio'/>
	I1209 23:38:59.072011   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     </interface>
	I1209 23:38:59.072019   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <serial type='pty'>
	I1209 23:38:59.072032   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)       <target port='0'/>
	I1209 23:38:59.072042   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     </serial>
	I1209 23:38:59.072050   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <console type='pty'>
	I1209 23:38:59.072061   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)       <target type='serial' port='0'/>
	I1209 23:38:59.072092   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     </console>
	I1209 23:38:59.072113   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     <rng model='virtio'>
	I1209 23:38:59.072140   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)       <backend model='random'>/dev/random</backend>
	I1209 23:38:59.072161   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     </rng>
	I1209 23:38:59.072176   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     
	I1209 23:38:59.072186   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)     
	I1209 23:38:59.072198   60836 main.go:141] libmachine: (kubernetes-upgrade-996806)   </devices>
	I1209 23:38:59.072207   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) </domain>
	I1209 23:38:59.072218   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) 
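
Likewise, the domain XML logged above can be applied by hand. A sketch, assuming the <domain> element has been saved to kubernetes-upgrade-996806.xml (an illustrative file name):

    virsh define kubernetes-upgrade-996806.xml    # register the domain definition
    virsh start kubernetes-upgrade-996806         # boot it; first boot device is the boot2docker ISO
    virsh dumpxml kubernetes-upgrade-996806       # inspect what libvirt actually stored
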
	I1209 23:38:59.076152   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:26:08:8f in network default
	I1209 23:38:59.076770   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Ensuring networks are active...
	I1209 23:38:59.076795   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:38:59.077666   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Ensuring network default is active
	I1209 23:38:59.078035   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Ensuring network mk-kubernetes-upgrade-996806 is active
	I1209 23:38:59.078596   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Getting domain xml...
	I1209 23:38:59.079331   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Creating domain...
	I1209 23:39:00.389940   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Waiting to get IP...
	I1209 23:39:00.390993   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:00.391510   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:00.391602   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:00.391511   63354 retry.go:31] will retry after 245.816022ms: waiting for machine to come up
	I1209 23:39:00.639268   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:00.639829   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:00.639864   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:00.639776   63354 retry.go:31] will retry after 249.620782ms: waiting for machine to come up
	I1209 23:39:00.891521   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:00.892037   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:00.892064   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:00.892003   63354 retry.go:31] will retry after 316.104479ms: waiting for machine to come up
	I1209 23:39:01.209517   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:01.209948   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:01.209978   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:01.209899   63354 retry.go:31] will retry after 459.872361ms: waiting for machine to come up
	I1209 23:39:01.671851   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:01.672362   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:01.672390   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:01.672321   63354 retry.go:31] will retry after 537.935267ms: waiting for machine to come up
	I1209 23:39:02.212263   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:02.212747   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:02.212774   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:02.212693   63354 retry.go:31] will retry after 813.360651ms: waiting for machine to come up
	I1209 23:39:03.027850   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:03.028421   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:03.028447   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:03.028360   63354 retry.go:31] will retry after 1.059490352s: waiting for machine to come up
	I1209 23:39:04.089802   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:04.090256   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:04.090279   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:04.090208   63354 retry.go:31] will retry after 1.083869861s: waiting for machine to come up
	I1209 23:39:05.175348   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:05.175809   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:05.175845   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:05.175775   63354 retry.go:31] will retry after 1.298766691s: waiting for machine to come up
	I1209 23:39:06.475848   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:06.476239   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:06.476260   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:06.476191   63354 retry.go:31] will retry after 1.711133789s: waiting for machine to come up
	I1209 23:39:08.190353   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:08.190849   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:08.190879   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:08.190792   63354 retry.go:31] will retry after 1.835777371s: waiting for machine to come up
	I1209 23:39:10.028032   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:10.028494   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:10.028525   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:10.028415   63354 retry.go:31] will retry after 2.520531662s: waiting for machine to come up
	I1209 23:39:12.552129   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:12.552587   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:12.552616   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:12.552526   63354 retry.go:31] will retry after 3.123099251s: waiting for machine to come up
	I1209 23:39:15.677617   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:15.678047   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find current IP address of domain kubernetes-upgrade-996806 in network mk-kubernetes-upgrade-996806
	I1209 23:39:15.678075   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | I1209 23:39:15.678005   63354 retry.go:31] will retry after 3.475008859s: waiting for machine to come up
	I1209 23:39:19.154426   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.154871   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Found IP for machine: 192.168.50.55
	I1209 23:39:19.154906   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has current primary IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
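
The retry loop above is polling libvirt's DHCP leases for the domain's MAC address; the same information can be read directly:

    virsh net-dhcp-leases mk-kubernetes-upgrade-996806    # leases handed out on the private network
    virsh domiflist kubernetes-upgrade-996806             # MAC address per attached network
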
	I1209 23:39:19.154917   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Reserving static IP address...
	I1209 23:39:19.155197   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-996806", mac: "52:54:00:e4:8f:d6", ip: "192.168.50.55"} in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.230952   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Getting to WaitForSSH function...
	I1209 23:39:19.230986   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Reserved static IP address: 192.168.50.55
	I1209 23:39:19.231000   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Waiting for SSH to be available...
	I1209 23:39:19.233950   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.234301   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:19.234334   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.234490   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Using SSH client type: external
	I1209 23:39:19.234521   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806/id_rsa (-rw-------)
	I1209 23:39:19.234552   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:39:19.234566   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | About to run SSH command:
	I1209 23:39:19.234585   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | exit 0
	I1209 23:39:19.355738   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | SSH cmd err, output: <nil>: 
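
The WaitForSSH step shells out to /usr/bin/ssh with the options shown above and treats a successful `exit 0` as readiness. An equivalent manual probe, using the logged key path and a trimmed set of the same options:

    ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806/id_rsa \
        docker@192.168.50.55 'exit 0' && echo "ssh is up"
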
	I1209 23:39:19.356030   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) KVM machine creation complete!
	I1209 23:39:19.356363   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetConfigRaw
	I1209 23:39:19.356930   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .DriverName
	I1209 23:39:19.357163   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .DriverName
	I1209 23:39:19.357374   60836 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 23:39:19.357386   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetState
	I1209 23:39:19.358856   60836 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 23:39:19.358872   60836 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 23:39:19.358879   60836 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 23:39:19.358888   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHHostname
	I1209 23:39:19.361383   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.361846   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:19.361877   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.362053   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHPort
	I1209 23:39:19.362247   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:19.362412   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:19.362561   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHUsername
	I1209 23:39:19.362730   60836 main.go:141] libmachine: Using SSH client type: native
	I1209 23:39:19.362940   60836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I1209 23:39:19.362952   60836 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 23:39:19.458893   60836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:39:19.458923   60836 main.go:141] libmachine: Detecting the provisioner...
	I1209 23:39:19.458934   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHHostname
	I1209 23:39:19.461786   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.462183   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:19.462211   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.462359   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHPort
	I1209 23:39:19.462552   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:19.462707   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:19.462852   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHUsername
	I1209 23:39:19.463022   60836 main.go:141] libmachine: Using SSH client type: native
	I1209 23:39:19.463201   60836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I1209 23:39:19.463214   60836 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 23:39:19.561203   60836 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 23:39:19.561279   60836 main.go:141] libmachine: found compatible host: buildroot
	I1209 23:39:19.561290   60836 main.go:141] libmachine: Provisioning with buildroot...
	I1209 23:39:19.561312   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetMachineName
	I1209 23:39:19.561542   60836 buildroot.go:166] provisioning hostname "kubernetes-upgrade-996806"
	I1209 23:39:19.561577   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetMachineName
	I1209 23:39:19.561772   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHHostname
	I1209 23:39:19.565080   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.565497   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:19.565518   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.565710   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHPort
	I1209 23:39:19.565854   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:19.566013   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:19.566137   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHUsername
	I1209 23:39:19.566333   60836 main.go:141] libmachine: Using SSH client type: native
	I1209 23:39:19.566581   60836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I1209 23:39:19.566602   60836 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-996806 && echo "kubernetes-upgrade-996806" | sudo tee /etc/hostname
	I1209 23:39:19.678566   60836 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-996806
	
	I1209 23:39:19.678601   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHHostname
	I1209 23:39:19.681374   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.681672   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:19.681707   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.681905   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHPort
	I1209 23:39:19.682080   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:19.682250   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:19.682394   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHUsername
	I1209 23:39:19.682617   60836 main.go:141] libmachine: Using SSH client type: native
	I1209 23:39:19.682875   60836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I1209 23:39:19.682908   60836 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-996806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-996806/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-996806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:39:19.791246   60836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
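
A quick way to confirm the hostname provisioning above took effect inside the guest:

    hostname                                      # expect: kubernetes-upgrade-996806
    grep kubernetes-upgrade-996806 /etc/hosts     # expect the 127.0.1.1 entry written by the script above
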
	I1209 23:39:19.791276   60836 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:39:19.791297   60836 buildroot.go:174] setting up certificates
	I1209 23:39:19.791309   60836 provision.go:84] configureAuth start
	I1209 23:39:19.791318   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetMachineName
	I1209 23:39:19.791628   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetIP
	I1209 23:39:19.794283   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.794667   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:19.794695   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.794869   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHHostname
	I1209 23:39:19.797180   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.797543   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:19.797574   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.797747   60836 provision.go:143] copyHostCerts
	I1209 23:39:19.797812   60836 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:39:19.797835   60836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:39:19.797921   60836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:39:19.798030   60836 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:39:19.798042   60836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:39:19.798072   60836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:39:19.798143   60836 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:39:19.798153   60836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:39:19.798182   60836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:39:19.798251   60836 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-996806 san=[127.0.0.1 192.168.50.55 kubernetes-upgrade-996806 localhost minikube]
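
The server certificate is generated with the SANs listed above; they can be verified with openssl (path taken from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'   # should list 127.0.0.1, 192.168.50.55, kubernetes-upgrade-996806, localhost, minikube
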
	I1209 23:39:19.920985   60836 provision.go:177] copyRemoteCerts
	I1209 23:39:19.921049   60836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:39:19.921072   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHHostname
	I1209 23:39:19.923632   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.923961   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:19.924002   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:19.924112   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHPort
	I1209 23:39:19.924239   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:19.924361   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHUsername
	I1209 23:39:19.924543   60836 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806/id_rsa Username:docker}
	I1209 23:39:20.009007   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:39:20.035017   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1209 23:39:20.059312   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:39:20.085588   60836 provision.go:87] duration metric: took 294.264519ms to configureAuth
	I1209 23:39:20.085621   60836 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:39:20.085863   60836 config.go:182] Loaded profile config "kubernetes-upgrade-996806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:39:20.085965   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHHostname
	I1209 23:39:20.088628   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.088962   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:20.088995   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.089197   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHPort
	I1209 23:39:20.089425   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:20.089596   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:20.089744   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHUsername
	I1209 23:39:20.089899   60836 main.go:141] libmachine: Using SSH client type: native
	I1209 23:39:20.090051   60836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I1209 23:39:20.090065   60836 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:39:20.333029   60836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
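
To double-check the container-runtime option written above and that CRI-O came back after the restart:

    cat /etc/sysconfig/crio.minikube    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio            # expect: active
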
	
	I1209 23:39:20.333067   60836 main.go:141] libmachine: Checking connection to Docker...
	I1209 23:39:20.333078   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetURL
	I1209 23:39:20.334635   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Using libvirt version 6000000
	I1209 23:39:20.337277   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.337677   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:20.337704   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.337894   60836 main.go:141] libmachine: Docker is up and running!
	I1209 23:39:20.337919   60836 main.go:141] libmachine: Reticulating splines...
	I1209 23:39:20.337937   60836 client.go:171] duration metric: took 21.736526866s to LocalClient.Create
	I1209 23:39:20.337962   60836 start.go:167] duration metric: took 21.736595741s to libmachine.API.Create "kubernetes-upgrade-996806"
	I1209 23:39:20.337975   60836 start.go:293] postStartSetup for "kubernetes-upgrade-996806" (driver="kvm2")
	I1209 23:39:20.337989   60836 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:39:20.338007   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .DriverName
	I1209 23:39:20.338284   60836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:39:20.338311   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHHostname
	I1209 23:39:20.340913   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.341285   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:20.341324   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.341466   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHPort
	I1209 23:39:20.341666   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:20.341859   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHUsername
	I1209 23:39:20.342025   60836 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806/id_rsa Username:docker}
	I1209 23:39:20.421669   60836 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:39:20.427274   60836 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:39:20.427315   60836 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:39:20.427402   60836 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:39:20.427553   60836 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:39:20.427727   60836 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:39:20.436878   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:39:20.462838   60836 start.go:296] duration metric: took 124.842747ms for postStartSetup
	I1209 23:39:20.462901   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetConfigRaw
	I1209 23:39:20.463650   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetIP
	I1209 23:39:20.466875   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.467259   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:20.467289   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.467536   60836 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/config.json ...
	I1209 23:39:20.467793   60836 start.go:128] duration metric: took 21.887621696s to createHost
	I1209 23:39:20.467826   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHHostname
	I1209 23:39:20.470595   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.470995   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:20.471022   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.471225   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHPort
	I1209 23:39:20.471410   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:20.471599   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:20.471752   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHUsername
	I1209 23:39:20.471938   60836 main.go:141] libmachine: Using SSH client type: native
	I1209 23:39:20.472121   60836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I1209 23:39:20.472134   60836 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:39:20.573366   60836 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733787560.545404201
	
	I1209 23:39:20.573387   60836 fix.go:216] guest clock: 1733787560.545404201
	I1209 23:39:20.573396   60836 fix.go:229] Guest: 2024-12-09 23:39:20.545404201 +0000 UTC Remote: 2024-12-09 23:39:20.467810408 +0000 UTC m=+48.145304299 (delta=77.593793ms)
	I1209 23:39:20.573416   60836 fix.go:200] guest clock delta is within tolerance: 77.593793ms
	I1209 23:39:20.573421   60836 start.go:83] releasing machines lock for "kubernetes-upgrade-996806", held for 21.993410099s
	I1209 23:39:20.573445   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .DriverName
	I1209 23:39:20.573735   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetIP
	I1209 23:39:20.576911   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.577318   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:20.577344   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.577539   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .DriverName
	I1209 23:39:20.578144   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .DriverName
	I1209 23:39:20.578332   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .DriverName
	I1209 23:39:20.578432   60836 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:39:20.578487   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHHostname
	I1209 23:39:20.578567   60836 ssh_runner.go:195] Run: cat /version.json
	I1209 23:39:20.578599   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHHostname
	I1209 23:39:20.581685   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.582016   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.582050   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:20.582092   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.582223   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHPort
	I1209 23:39:20.582390   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:20.582554   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHUsername
	I1209 23:39:20.582632   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:20.582653   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:20.582776   60836 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806/id_rsa Username:docker}
	I1209 23:39:20.582857   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHPort
	I1209 23:39:20.583034   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:39:20.583210   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHUsername
	I1209 23:39:20.583387   60836 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806/id_rsa Username:docker}
	I1209 23:39:20.685546   60836 ssh_runner.go:195] Run: systemctl --version
	I1209 23:39:20.693507   60836 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:39:20.863788   60836 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:39:20.872019   60836 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:39:20.872114   60836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:39:20.896406   60836 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:39:20.896435   60836 start.go:495] detecting cgroup driver to use...
	I1209 23:39:20.896509   60836 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:39:20.912507   60836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:39:20.933693   60836 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:39:20.933759   60836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:39:20.952445   60836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:39:20.970464   60836 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:39:21.101038   60836 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:39:21.271055   60836 docker.go:233] disabling docker service ...
	I1209 23:39:21.271126   60836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:39:21.287995   60836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:39:21.303376   60836 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:39:21.474162   60836 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:39:21.587904   60836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
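
After the stop/disable/mask sequence above, the Docker and cri-docker units should no longer be runnable; a quick check inside the guest:

    systemctl is-active docker.service cri-docker.service     # expect: inactive (or unknown if absent)
    systemctl is-enabled docker.service docker.socket         # expect: masked / disabled
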
	I1209 23:39:21.601462   60836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:39:21.620161   60836 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 23:39:21.620230   60836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:39:21.630214   60836 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:39:21.630285   60836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:39:21.639897   60836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:39:21.649296   60836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
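
The three sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and move conmon into the pod cgroup. Assuming each pattern matched, the drop-in should now read:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
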
	I1209 23:39:21.659201   60836 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:39:21.672054   60836 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:39:21.683491   60836 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:39:21.683548   60836 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:39:21.696467   60836 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
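
The sysctl failure above only means br_netfilter was not loaded yet; after the modprobe and the echo, the relevant keys can be checked with:

    lsmod | grep br_netfilter                   # module now loaded
    sysctl net.bridge.bridge-nf-call-iptables   # key exists once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward           # 1, written by the echo above
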
	I1209 23:39:21.706329   60836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:39:21.839226   60836 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:39:21.939807   60836 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:39:21.939876   60836 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:39:21.945265   60836 start.go:563] Will wait 60s for crictl version
	I1209 23:39:21.945335   60836 ssh_runner.go:195] Run: which crictl
	I1209 23:39:21.949178   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:39:21.988556   60836 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:39:21.988643   60836 ssh_runner.go:195] Run: crio --version
	I1209 23:39:22.015887   60836 ssh_runner.go:195] Run: crio --version
	I1209 23:39:22.043663   60836 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 23:39:22.045174   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetIP
	I1209 23:39:22.047995   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:22.048393   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:39:13 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:39:22.048424   60836 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:39:22.048573   60836 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 23:39:22.052823   60836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
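	Note: the bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends the gateway address, so the guest can reach services on the build host by name. A quick check (address per the log):

	    grep host.minikube.internal /etc/hosts
	    # 192.168.50.1	host.minikube.internal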
	I1209 23:39:22.065496   60836 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-996806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-996806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:39:22.065617   60836 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:39:22.065679   60836 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:39:22.099952   60836 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:39:22.100018   60836 ssh_runner.go:195] Run: which lz4
	I1209 23:39:22.104545   60836 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:39:22.110177   60836 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:39:22.110209   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 23:39:23.710657   60836 crio.go:462] duration metric: took 1.606155831s to copy over tarball
	I1209 23:39:23.710732   60836 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:39:26.346057   60836 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.635299615s)
	I1209 23:39:26.346082   60836 crio.go:469] duration metric: took 2.63539965s to extract the tarball
	I1209 23:39:26.346089   60836 ssh_runner.go:146] rm: /preloaded.tar.lz4
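	Note: the preload step above copies the ~473 MB tarball to /preloaded.tar.lz4 on the guest, unpacks it into /var with extended attributes preserved, then deletes it. The equivalent manual sequence, exactly as the log runs it (lz4 must be available on the guest):

	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4
	    sudo crictl images --output json   # re-list to see whether the images reached the CRI-O store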
	I1209 23:39:26.391324   60836 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:39:26.435405   60836 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:39:26.435439   60836 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:39:26.435538   60836 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:39:26.435577   60836 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:39:26.435599   60836 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 23:39:26.435538   60836 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:39:26.435580   60836 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:39:26.435537   60836 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:39:26.435586   60836 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:39:26.435588   60836 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 23:39:26.437175   60836 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:39:26.437199   60836 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:39:26.437222   60836 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:39:26.437174   60836 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:39:26.437176   60836 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:39:26.437295   60836 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:39:26.437175   60836 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 23:39:26.437179   60836 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 23:39:26.585836   60836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:39:26.591913   60836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 23:39:26.593382   60836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:39:26.599324   60836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:39:26.603230   60836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 23:39:26.623044   60836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 23:39:26.633088   60836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:39:26.659704   60836 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 23:39:26.659766   60836 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:39:26.659829   60836 ssh_runner.go:195] Run: which crictl
	I1209 23:39:26.705260   60836 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 23:39:26.705306   60836 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 23:39:26.705353   60836 ssh_runner.go:195] Run: which crictl
	I1209 23:39:26.747723   60836 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 23:39:26.747755   60836 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 23:39:26.747780   60836 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:39:26.747789   60836 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:39:26.747798   60836 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 23:39:26.747831   60836 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:39:26.747835   60836 ssh_runner.go:195] Run: which crictl
	I1209 23:39:26.747870   60836 ssh_runner.go:195] Run: which crictl
	I1209 23:39:26.747836   60836 ssh_runner.go:195] Run: which crictl
	I1209 23:39:26.761262   60836 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 23:39:26.761313   60836 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 23:39:26.761367   60836 ssh_runner.go:195] Run: which crictl
	I1209 23:39:26.762021   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:39:26.762067   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:39:26.762089   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:39:26.762021   60836 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 23:39:26.762166   60836 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:39:26.762198   60836 ssh_runner.go:195] Run: which crictl
	I1209 23:39:26.762091   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:39:26.762130   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:39:26.766559   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:39:26.901443   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:39:26.901538   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:39:26.901575   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:39:26.901538   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:39:26.901651   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:39:26.901656   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:39:26.901700   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:39:27.048525   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:39:27.048596   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:39:27.048625   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:39:27.048691   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:39:27.048691   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:39:27.048751   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:39:27.048786   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:39:27.224952   60836 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 23:39:27.225012   60836 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 23:39:27.225048   60836 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:39:27.225110   60836 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 23:39:27.225110   60836 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 23:39:27.225180   60836 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 23:39:27.225232   60836 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 23:39:27.258259   60836 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 23:39:27.346971   60836 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:39:27.486291   60836 cache_images.go:92] duration metric: took 1.050830126s to LoadCachedImages
	W1209 23:39:27.486397   60836 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
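	Note: the image-cache fallback above fails because the per-image files minikube expects under its cache directory are missing on the build host, so nothing can be transferred to the guest. A sketch of checking the cache locally (path taken from the log):

	    ls -l /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/
	    # an empty or partial listing here matches the "Unable to load cached images" warning;
	    # kubeadm will instead pull the images itself during 'kubeadm init' preflight.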
	I1209 23:39:27.486418   60836 kubeadm.go:934] updating node { 192.168.50.55 8443 v1.20.0 crio true true} ...
	I1209 23:39:27.486519   60836 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-996806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-996806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:39:27.486591   60836 ssh_runner.go:195] Run: crio config
	I1209 23:39:27.528975   60836 cni.go:84] Creating CNI manager for ""
	I1209 23:39:27.529003   60836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:39:27.529013   60836 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:39:27.529037   60836 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.55 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-996806 NodeName:kubernetes-upgrade-996806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 23:39:27.529194   60836 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-996806"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:39:27.529277   60836 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 23:39:27.538623   60836 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:39:27.538681   60836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:39:27.547689   60836 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1209 23:39:27.563753   60836 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:39:27.579615   60836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
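	Note: at this point the generated kubeadm config has been written to /var/tmp/minikube/kubeadm.yaml.new on the guest. As an illustration only (not a step the log performs), the preflight phase can be exercised against it on its own before the full init; binary and file paths are taken from the log:

	    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init phase preflight \
	      --config /var/tmp/minikube/kubeadm.yaml.new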
	I1209 23:39:27.595617   60836 ssh_runner.go:195] Run: grep 192.168.50.55	control-plane.minikube.internal$ /etc/hosts
	I1209 23:39:27.599232   60836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:39:27.611605   60836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:39:27.737497   60836 ssh_runner.go:195] Run: sudo systemctl start kubelet
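	Note: with the unit files in place, the log reloads systemd and starts kubelet before generating certificates. A sketch of confirming what systemd actually picked up (paths per the log):

	    systemctl cat kubelet --no-pager
	    systemctl status kubelet --no-pager
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf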
	I1209 23:39:27.753841   60836 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806 for IP: 192.168.50.55
	I1209 23:39:27.753862   60836 certs.go:194] generating shared ca certs ...
	I1209 23:39:27.753883   60836 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:39:27.754038   60836 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:39:27.754090   60836 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:39:27.754104   60836 certs.go:256] generating profile certs ...
	I1209 23:39:27.754187   60836 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/client.key
	I1209 23:39:27.754204   60836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/client.crt with IP's: []
	I1209 23:39:27.806599   60836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/client.crt ...
	I1209 23:39:27.806628   60836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/client.crt: {Name:mkfc352d08f374fa3a714f1891270f98fb509c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:39:27.806819   60836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/client.key ...
	I1209 23:39:27.806841   60836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/client.key: {Name:mk6b38902789a8b76b026208660194bc73dcd30f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:39:27.806947   60836 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/apiserver.key.f936ea8a
	I1209 23:39:27.806968   60836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/apiserver.crt.f936ea8a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.55]
	I1209 23:39:28.026681   60836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/apiserver.crt.f936ea8a ...
	I1209 23:39:28.026711   60836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/apiserver.crt.f936ea8a: {Name:mk1d53cdfd15a744c77fb25548541ff214a89b8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:39:28.026904   60836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/apiserver.key.f936ea8a ...
	I1209 23:39:28.026923   60836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/apiserver.key.f936ea8a: {Name:mk5f73c7176e3363aca21fa3ea78f28eedd180d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:39:28.027022   60836 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/apiserver.crt.f936ea8a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/apiserver.crt
	I1209 23:39:28.027141   60836 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/apiserver.key.f936ea8a -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/apiserver.key
	I1209 23:39:28.027241   60836 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/proxy-client.key
	I1209 23:39:28.027262   60836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/proxy-client.crt with IP's: []
	I1209 23:39:28.165602   60836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/proxy-client.crt ...
	I1209 23:39:28.165634   60836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/proxy-client.crt: {Name:mk00800a34ba2825406db6777196a80540148fc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:39:28.270921   60836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/proxy-client.key ...
	I1209 23:39:28.270975   60836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/proxy-client.key: {Name:mk9d4d5ae7ac28d9b38de489f7a2e470bfd9688b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:39:28.271205   60836 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:39:28.271256   60836 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:39:28.271272   60836 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:39:28.271320   60836 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:39:28.271357   60836 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:39:28.271390   60836 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:39:28.271451   60836 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:39:28.272148   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:39:28.304599   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:39:28.332621   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:39:28.356854   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:39:28.380036   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 23:39:28.407139   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:39:28.429951   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:39:28.457734   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 23:39:28.485167   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:39:28.512877   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:39:28.535269   60836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:39:28.561453   60836 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
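	Note: the certificates generated above are copied under /var/lib/minikube/certs on the guest; per the log, the apiserver cert was signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.50.55. A sketch of confirming the SANs on the copied cert (path per the log):

	    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	      | grep -A1 'Subject Alternative Name'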
	I1209 23:39:28.578184   60836 ssh_runner.go:195] Run: openssl version
	I1209 23:39:28.585962   60836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:39:28.598445   60836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:39:28.603166   60836 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:39:28.603235   60836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:39:28.608931   60836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:39:28.619636   60836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:39:28.630089   60836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:39:28.634704   60836 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:39:28.634771   60836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:39:28.640604   60836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:39:28.651071   60836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:39:28.664323   60836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:39:28.668469   60836 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:39:28.668532   60836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:39:28.673753   60836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
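	Note: the block above installs three PEM bundles under /usr/share/ca-certificates and links each into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0). The hash names come straight from openssl, as in this sketch for the minikube CA (link target matches the log):

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0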
	I1209 23:39:28.683685   60836 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:39:28.687339   60836 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 23:39:28.687398   60836 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-996806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-996806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:39:28.687461   60836 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:39:28.687499   60836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:39:28.723097   60836 cri.go:89] found id: ""
	I1209 23:39:28.723176   60836 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:39:28.733423   60836 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:39:28.747232   60836 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:39:28.760306   60836 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:39:28.760334   60836 kubeadm.go:157] found existing configuration files:
	
	I1209 23:39:28.760400   60836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:39:28.773166   60836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:39:28.773230   60836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:39:28.783310   60836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:39:28.792897   60836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:39:28.792963   60836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:39:28.805803   60836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:39:28.815812   60836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:39:28.815880   60836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:39:28.826255   60836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:39:28.836018   60836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:39:28.836085   60836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
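	Note: the grep/rm pairs above implement the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. Collapsed into one loop, the same logic looks like this sketch (endpoint and file names per the log):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done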
	I1209 23:39:28.846237   60836 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 23:39:28.968395   60836 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 23:39:28.968488   60836 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 23:39:29.113061   60836 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 23:39:29.113181   60836 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 23:39:29.113321   60836 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 23:39:29.297159   60836 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 23:39:29.448768   60836 out.go:235]   - Generating certificates and keys ...
	I1209 23:39:29.448885   60836 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 23:39:29.449026   60836 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 23:39:29.449137   60836 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 23:39:29.713635   60836 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 23:39:29.800744   60836 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 23:39:29.880898   60836 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 23:39:30.002622   60836 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 23:39:30.002804   60836 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-996806 localhost] and IPs [192.168.50.55 127.0.0.1 ::1]
	I1209 23:39:30.089731   60836 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 23:39:30.090086   60836 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-996806 localhost] and IPs [192.168.50.55 127.0.0.1 ::1]
	I1209 23:39:30.338817   60836 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 23:39:30.648011   60836 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 23:39:31.002318   60836 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 23:39:31.002500   60836 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 23:39:31.137390   60836 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 23:39:31.225155   60836 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 23:39:31.554034   60836 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 23:39:31.607880   60836 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 23:39:31.624828   60836 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 23:39:31.626060   60836 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 23:39:31.627675   60836 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 23:39:31.775338   60836 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 23:39:31.777026   60836 out.go:235]   - Booting up control plane ...
	I1209 23:39:31.777169   60836 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 23:39:31.784113   60836 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 23:39:31.785097   60836 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 23:39:31.785926   60836 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 23:39:31.790345   60836 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 23:40:11.781416   60836 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 23:40:11.781584   60836 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:40:11.781863   60836 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:40:16.782365   60836 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:40:16.782576   60836 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:40:26.781886   60836 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:40:26.782116   60836 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:40:46.781323   60836 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:40:46.781598   60836 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:41:26.783346   60836 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:41:26.783955   60836 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:41:26.783988   60836 kubeadm.go:310] 
	I1209 23:41:26.784074   60836 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 23:41:26.784163   60836 kubeadm.go:310] 		timed out waiting for the condition
	I1209 23:41:26.784172   60836 kubeadm.go:310] 
	I1209 23:41:26.784239   60836 kubeadm.go:310] 	This error is likely caused by:
	I1209 23:41:26.784308   60836 kubeadm.go:310] 		- The kubelet is not running
	I1209 23:41:26.784547   60836 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 23:41:26.784560   60836 kubeadm.go:310] 
	I1209 23:41:26.784805   60836 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 23:41:26.784887   60836 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 23:41:26.784962   60836 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 23:41:26.784973   60836 kubeadm.go:310] 
	I1209 23:41:26.785227   60836 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 23:41:26.785455   60836 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 23:41:26.785500   60836 kubeadm.go:310] 
	I1209 23:41:26.785724   60836 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 23:41:26.785941   60836 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 23:41:26.786106   60836 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 23:41:26.786281   60836 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 23:41:26.786322   60836 kubeadm.go:310] 
	I1209 23:41:26.786687   60836 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 23:41:26.786882   60836 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 23:41:26.787152   60836 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1209 23:41:26.787299   60836 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-996806 localhost] and IPs [192.168.50.55 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-996806 localhost] and IPs [192.168.50.55 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 23:41:26.787377   60836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 23:41:28.223486   60836 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.436077826s)
	I1209 23:41:28.223594   60836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:41:28.242105   60836 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:41:28.254427   60836 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:41:28.254461   60836 kubeadm.go:157] found existing configuration files:
	
	I1209 23:41:28.254512   60836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:41:28.266043   60836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:41:28.266107   60836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:41:28.276088   60836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:41:28.288104   60836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:41:28.288172   60836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:41:28.300966   60836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:41:28.313303   60836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:41:28.313373   60836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:41:28.323363   60836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:41:28.332520   60836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:41:28.332592   60836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:41:28.342731   60836 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 23:41:28.405649   60836 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 23:41:28.405770   60836 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 23:41:28.541950   60836 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 23:41:28.542116   60836 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 23:41:28.542255   60836 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 23:41:28.728317   60836 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 23:41:28.730131   60836 out.go:235]   - Generating certificates and keys ...
	I1209 23:41:28.730238   60836 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 23:41:28.730339   60836 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 23:41:28.730462   60836 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 23:41:28.730555   60836 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 23:41:28.730665   60836 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 23:41:28.730756   60836 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 23:41:28.730855   60836 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 23:41:28.731236   60836 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 23:41:28.731654   60836 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 23:41:28.732140   60836 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 23:41:28.732201   60836 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 23:41:28.732298   60836 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 23:41:28.904509   60836 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 23:41:29.065967   60836 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 23:41:29.215736   60836 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 23:41:29.393006   60836 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 23:41:29.411551   60836 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 23:41:29.412969   60836 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 23:41:29.413082   60836 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 23:41:29.577118   60836 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 23:41:29.578553   60836 out.go:235]   - Booting up control plane ...
	I1209 23:41:29.578687   60836 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 23:41:29.578797   60836 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 23:41:29.579024   60836 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 23:41:29.589462   60836 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 23:41:29.592827   60836 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 23:42:09.594987   60836 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 23:42:09.595286   60836 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:42:09.595518   60836 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:42:14.595789   60836 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:42:14.596020   60836 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:42:24.596514   60836 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:42:24.596696   60836 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:42:44.596059   60836 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:42:44.596328   60836 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:43:24.595743   60836 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:43:24.595984   60836 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:43:24.595995   60836 kubeadm.go:310] 
	I1209 23:43:24.596030   60836 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 23:43:24.596064   60836 kubeadm.go:310] 		timed out waiting for the condition
	I1209 23:43:24.596070   60836 kubeadm.go:310] 
	I1209 23:43:24.596104   60836 kubeadm.go:310] 	This error is likely caused by:
	I1209 23:43:24.596158   60836 kubeadm.go:310] 		- The kubelet is not running
	I1209 23:43:24.596256   60836 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 23:43:24.596264   60836 kubeadm.go:310] 
	I1209 23:43:24.596394   60836 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 23:43:24.596461   60836 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 23:43:24.596504   60836 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 23:43:24.596515   60836 kubeadm.go:310] 
	I1209 23:43:24.596657   60836 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 23:43:24.596773   60836 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 23:43:24.596786   60836 kubeadm.go:310] 
	I1209 23:43:24.596914   60836 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 23:43:24.597047   60836 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 23:43:24.597169   60836 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 23:43:24.597271   60836 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 23:43:24.597283   60836 kubeadm.go:310] 
	I1209 23:43:24.597705   60836 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 23:43:24.597826   60836 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 23:43:24.597915   60836 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 23:43:24.597986   60836 kubeadm.go:394] duration metric: took 3m55.910591313s to StartCluster
	I1209 23:43:24.598044   60836 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:43:24.598108   60836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:43:24.631548   60836 cri.go:89] found id: ""
	I1209 23:43:24.631592   60836 logs.go:282] 0 containers: []
	W1209 23:43:24.631611   60836 logs.go:284] No container was found matching "kube-apiserver"
	I1209 23:43:24.631619   60836 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:43:24.631678   60836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:43:24.663120   60836 cri.go:89] found id: ""
	I1209 23:43:24.663150   60836 logs.go:282] 0 containers: []
	W1209 23:43:24.663157   60836 logs.go:284] No container was found matching "etcd"
	I1209 23:43:24.663163   60836 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:43:24.663222   60836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:43:24.694646   60836 cri.go:89] found id: ""
	I1209 23:43:24.694670   60836 logs.go:282] 0 containers: []
	W1209 23:43:24.694677   60836 logs.go:284] No container was found matching "coredns"
	I1209 23:43:24.694683   60836 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:43:24.694738   60836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:43:24.733155   60836 cri.go:89] found id: ""
	I1209 23:43:24.733182   60836 logs.go:282] 0 containers: []
	W1209 23:43:24.733190   60836 logs.go:284] No container was found matching "kube-scheduler"
	I1209 23:43:24.733196   60836 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:43:24.733275   60836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:43:24.766987   60836 cri.go:89] found id: ""
	I1209 23:43:24.767017   60836 logs.go:282] 0 containers: []
	W1209 23:43:24.767028   60836 logs.go:284] No container was found matching "kube-proxy"
	I1209 23:43:24.767036   60836 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:43:24.767093   60836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:43:24.797701   60836 cri.go:89] found id: ""
	I1209 23:43:24.797725   60836 logs.go:282] 0 containers: []
	W1209 23:43:24.797733   60836 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 23:43:24.797739   60836 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:43:24.797787   60836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:43:24.832462   60836 cri.go:89] found id: ""
	I1209 23:43:24.832487   60836 logs.go:282] 0 containers: []
	W1209 23:43:24.832494   60836 logs.go:284] No container was found matching "kindnet"
	I1209 23:43:24.832508   60836 logs.go:123] Gathering logs for kubelet ...
	I1209 23:43:24.832519   60836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 23:43:24.882141   60836 logs.go:123] Gathering logs for dmesg ...
	I1209 23:43:24.882179   60836 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:43:24.894997   60836 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:43:24.895029   60836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 23:43:25.005191   60836 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 23:43:25.005216   60836 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:43:25.005233   60836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:43:25.102331   60836 logs.go:123] Gathering logs for container status ...
	I1209 23:43:25.102365   60836 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1209 23:43:25.153640   60836 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1209 23:43:25.153696   60836 out.go:270] * 
	* 
	W1209 23:43:25.153757   60836 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 23:43:25.153778   60836 out.go:270] * 
	* 
	W1209 23:43:25.154699   60836 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 23:43:25.158026   60836 out.go:201] 
	W1209 23:43:25.159365   60836 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 23:43:25.159417   60836 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 23:43:25.159442   60836 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 23:43:25.160792   60836 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-996806 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
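	The exit 109 above corresponds to the K8S_KUBELET_NOT_RUNNING error and the cgroup-driver suggestion printed at the end of the log. A minimal manual-repro sketch, assuming the same profile name and the kvm2/crio flags this test passes (the --extra-config value is taken from the suggestion in the log, not from the test itself):
	
		# retry the v1.20.0 start with the cgroup-driver hint from the log above
		minikube start -p kubernetes-upgrade-996806 --memory=2200 \
		  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
		  --extra-config=kubelet.cgroup-driver=systemd
	
		# inspect the kubelet unit on the node, as the kubeadm output recommends
		# (same invocation style as the ssh entries in the Audit table below)
		minikube ssh -p kubernetes-upgrade-996806 sudo systemctl status kubelet --full --no-pager
		minikube ssh -p kubernetes-upgrade-996806 sudo journalctl -xeu kubelet --no-pager
	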
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-996806
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-996806: (1.374354118s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-996806 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-996806 status --format={{.Host}}: exit status 7 (77.947287ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-996806 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-996806 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m28.454125333s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-996806 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-996806 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-996806 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (102.909769ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-996806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-996806
	    minikube start -p kubernetes-upgrade-996806 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9968062 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-996806 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
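	The refusal above (K8S_DOWNGRADE_UNSUPPORTED) leaves the cluster at v1.31.2. A quick verification sketch, reusing only commands already shown in this log (same profile name):
	
		# confirm the existing cluster still reports the newer version
		kubectl --context kubernetes-upgrade-996806 version --output=json
	
		# or restart the profile at the version it already runs (suggestion 3 above)
		minikube start -p kubernetes-upgrade-996806 --kubernetes-version=v1.31.2
	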
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-996806 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-996806 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m18.520109894s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-12-09 23:47:13.81112234 +0000 UTC m=+4518.242737446
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-996806 -n kubernetes-upgrade-996806
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-996806 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-996806 logs -n 25: (2.239881773s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-030585 sudo                               | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:47 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo                               | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo                               | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo cat                           | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo cat                           | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo                               | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo                               | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo cat                           | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo docker                        | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo                               | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo                               | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo cat                           | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo cat                           | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo                               | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo                               | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo                               | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo cat                           | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo cat                           | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo                               | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo                               | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo                               | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo find                          | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-030585 sudo crio                          | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p kindnet-030585                                    | kindnet-030585        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	| start   | -p custom-flannel-030585                             | custom-flannel-030585 | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
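	Each row of the Audit table above is a minikube command the harness issued against the kindnet-030585 profile while collecting diagnostics (the profile is deleted in the second-to-last row). Any single row can be re-run by hand while a profile still exists; a minimal sketch copying one journalctl row from the table (binary path and profile name taken from this report, so adjust for a local checkout):

	    out/minikube-linux-amd64 ssh -p kindnet-030585 sudo journalctl -xeu kubelet --all --full --no-pager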
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:47:06
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:47:06.139704   72683 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:47:06.140112   72683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:47:06.140126   72683 out.go:358] Setting ErrFile to fd 2...
	I1209 23:47:06.140130   72683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:47:06.140339   72683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:47:06.141008   72683 out.go:352] Setting JSON to false
	I1209 23:47:06.142129   72683 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8977,"bootTime":1733779049,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:47:06.142227   72683 start.go:139] virtualization: kvm guest
	I1209 23:47:06.144848   72683 out.go:177] * [custom-flannel-030585] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:47:06.146466   72683 notify.go:220] Checking for updates...
	I1209 23:47:06.146515   72683 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:47:06.148281   72683 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:47:06.149648   72683 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:47:06.150964   72683 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:47:06.152306   72683 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:47:06.153689   72683 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:47:06.155528   72683 config.go:182] Loaded profile config "calico-030585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:47:06.155719   72683 config.go:182] Loaded profile config "cert-expiration-801840": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:47:06.155860   72683 config.go:182] Loaded profile config "kubernetes-upgrade-996806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:47:06.155983   72683 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:47:06.194726   72683 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 23:47:06.196174   72683 start.go:297] selected driver: kvm2
	I1209 23:47:06.196209   72683 start.go:901] validating driver "kvm2" against <nil>
	I1209 23:47:06.196220   72683 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:47:06.197056   72683 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:47:06.197181   72683 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:47:06.214969   72683 install.go:137] /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:47:06.215038   72683 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:47:06.215260   72683 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:47:06.215301   72683 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1209 23:47:06.215314   72683 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1209 23:47:06.215371   72683 start.go:340] cluster config:
	{Name:custom-flannel-030585 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-030585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:47:06.215478   72683 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:47:06.217563   72683 out.go:177] * Starting "custom-flannel-030585" primary control-plane node in "custom-flannel-030585" cluster
	I1209 23:47:08.336430   71184 start.go:364] duration metric: took 23.222354122s to acquireMachinesLock for "cert-expiration-801840"
	I1209 23:47:08.336468   71184 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:47:08.336473   71184 fix.go:54] fixHost starting: 
	I1209 23:47:08.336828   71184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:47:08.336872   71184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:47:08.357214   71184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33555
	I1209 23:47:08.357584   71184 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:47:08.358042   71184 main.go:141] libmachine: Using API Version  1
	I1209 23:47:08.358063   71184 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:47:08.358489   71184 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:47:08.358687   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .DriverName
	I1209 23:47:08.358862   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetState
	I1209 23:47:08.360634   71184 fix.go:112] recreateIfNeeded on cert-expiration-801840: state=Running err=<nil>
	W1209 23:47:08.360649   71184 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:47:08.362693   71184 out.go:177] * Updating the running kvm2 "cert-expiration-801840" VM ...
	I1209 23:47:06.718074   71053 main.go:141] libmachine: (calico-030585) DBG | Getting to WaitForSSH function...
	I1209 23:47:06.720717   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:06.721165   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:06.721197   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:06.721372   71053 main.go:141] libmachine: (calico-030585) DBG | Using SSH client type: external
	I1209 23:47:06.721401   71053 main.go:141] libmachine: (calico-030585) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/calico-030585/id_rsa (-rw-------)
	I1209 23:47:06.721433   71053 main.go:141] libmachine: (calico-030585) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/calico-030585/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:47:06.721447   71053 main.go:141] libmachine: (calico-030585) DBG | About to run SSH command:
	I1209 23:47:06.721458   71053 main.go:141] libmachine: (calico-030585) DBG | exit 0
	I1209 23:47:06.852168   71053 main.go:141] libmachine: (calico-030585) DBG | SSH cmd err, output: <nil>: 
	I1209 23:47:06.852492   71053 main.go:141] libmachine: (calico-030585) KVM machine creation complete!
	I1209 23:47:06.852888   71053 main.go:141] libmachine: (calico-030585) Calling .GetConfigRaw
	I1209 23:47:06.853441   71053 main.go:141] libmachine: (calico-030585) Calling .DriverName
	I1209 23:47:06.853666   71053 main.go:141] libmachine: (calico-030585) Calling .DriverName
	I1209 23:47:06.853837   71053 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 23:47:06.853854   71053 main.go:141] libmachine: (calico-030585) Calling .GetState
	I1209 23:47:06.855289   71053 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 23:47:06.855305   71053 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 23:47:06.855311   71053 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 23:47:06.855316   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHHostname
	I1209 23:47:06.857623   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:06.857993   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:06.858021   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:06.858147   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHPort
	I1209 23:47:06.858328   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:06.858510   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:06.858664   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHUsername
	I1209 23:47:06.858816   71053 main.go:141] libmachine: Using SSH client type: native
	I1209 23:47:06.859066   71053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1209 23:47:06.859082   71053 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 23:47:06.974993   71053 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:47:06.975018   71053 main.go:141] libmachine: Detecting the provisioner...
	I1209 23:47:06.975028   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHHostname
	I1209 23:47:06.978371   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:06.978838   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:06.978878   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:06.979005   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHPort
	I1209 23:47:06.979222   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:06.979403   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:06.979555   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHUsername
	I1209 23:47:06.979787   71053 main.go:141] libmachine: Using SSH client type: native
	I1209 23:47:06.980007   71053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1209 23:47:06.980020   71053 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 23:47:07.100099   71053 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 23:47:07.100163   71053 main.go:141] libmachine: found compatible host: buildroot
	I1209 23:47:07.100177   71053 main.go:141] libmachine: Provisioning with buildroot...
	I1209 23:47:07.100187   71053 main.go:141] libmachine: (calico-030585) Calling .GetMachineName
	I1209 23:47:07.100450   71053 buildroot.go:166] provisioning hostname "calico-030585"
	I1209 23:47:07.100474   71053 main.go:141] libmachine: (calico-030585) Calling .GetMachineName
	I1209 23:47:07.100649   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHHostname
	I1209 23:47:07.103759   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:07.104204   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:07.104231   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:07.104440   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHPort
	I1209 23:47:07.104639   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:07.104846   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:07.105013   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHUsername
	I1209 23:47:07.105216   71053 main.go:141] libmachine: Using SSH client type: native
	I1209 23:47:07.105436   71053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1209 23:47:07.105454   71053 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-030585 && echo "calico-030585" | sudo tee /etc/hostname
	I1209 23:47:07.239436   71053 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-030585
	
	I1209 23:47:07.239468   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHHostname
	I1209 23:47:07.242983   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:07.243459   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:07.243510   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:07.243852   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHPort
	I1209 23:47:07.244077   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:07.244293   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:07.244483   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHUsername
	I1209 23:47:07.244687   71053 main.go:141] libmachine: Using SSH client type: native
	I1209 23:47:07.244927   71053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1209 23:47:07.244953   71053 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-030585' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-030585/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-030585' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:47:07.373255   71053 main.go:141] libmachine: SSH cmd err, output: <nil>: 
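	The SSH command at 23:47:07.244 is the provisioner's idempotent /etc/hosts update: it leaves the file alone when the hostname is already registered, rewrites an existing 127.0.1.1 entry if there is one, and only appends as a last resort. A standalone restatement of that pattern, with the hostname pulled out as a variable for illustration (the value below is taken from this log):

	    NEW_HOSTNAME=calico-030585                      # hostname from the log; substitute as needed
	    if ! grep -q "[[:space:]]${NEW_HOSTNAME}\$" /etc/hosts; then
	      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	        # rewrite the existing 127.0.1.1 entry in place
	        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NEW_HOSTNAME}/" /etc/hosts
	      else
	        # no 127.0.1.1 entry yet, append one
	        echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts
	      fi
	    fi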
	I1209 23:47:07.373282   71053 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:47:07.373340   71053 buildroot.go:174] setting up certificates
	I1209 23:47:07.373353   71053 provision.go:84] configureAuth start
	I1209 23:47:07.373371   71053 main.go:141] libmachine: (calico-030585) Calling .GetMachineName
	I1209 23:47:07.373684   71053 main.go:141] libmachine: (calico-030585) Calling .GetIP
	I1209 23:47:07.376767   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:07.377156   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:07.377187   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:07.377378   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHHostname
	I1209 23:47:07.379596   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:07.379961   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:07.379993   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:07.380093   71053 provision.go:143] copyHostCerts
	I1209 23:47:07.380153   71053 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:47:07.380175   71053 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:47:07.380249   71053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:47:07.380367   71053 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:47:07.380378   71053 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:47:07.380407   71053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:47:07.380461   71053 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:47:07.380468   71053 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:47:07.380490   71053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:47:07.380535   71053 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.calico-030585 san=[127.0.0.1 192.168.72.217 calico-030585 localhost minikube]
	I1209 23:47:07.670616   71053 provision.go:177] copyRemoteCerts
	I1209 23:47:07.670686   71053 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:47:07.670708   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHHostname
	I1209 23:47:07.673838   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:07.674187   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:07.674221   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:07.674446   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHPort
	I1209 23:47:07.674681   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:07.674845   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHUsername
	I1209 23:47:07.675025   71053 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/calico-030585/id_rsa Username:docker}
	I1209 23:47:07.768023   71053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:47:07.794852   71053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 23:47:07.820311   71053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:47:07.844562   71053 provision.go:87] duration metric: took 471.191439ms to configureAuth
	I1209 23:47:07.844597   71053 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:47:07.844789   71053 config.go:182] Loaded profile config "calico-030585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:47:07.844867   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHHostname
	I1209 23:47:07.848662   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:07.849121   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:07.849149   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:07.849332   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHPort
	I1209 23:47:07.849561   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:07.849780   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:07.849999   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHUsername
	I1209 23:47:07.850173   71053 main.go:141] libmachine: Using SSH client type: native
	I1209 23:47:07.850376   71053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1209 23:47:07.850395   71053 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:47:08.081472   71053 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:47:08.081506   71053 main.go:141] libmachine: Checking connection to Docker...
	I1209 23:47:08.081517   71053 main.go:141] libmachine: (calico-030585) Calling .GetURL
	I1209 23:47:08.082872   71053 main.go:141] libmachine: (calico-030585) DBG | Using libvirt version 6000000
	I1209 23:47:08.084810   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.085173   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:08.085204   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.085331   71053 main.go:141] libmachine: Docker is up and running!
	I1209 23:47:08.085343   71053 main.go:141] libmachine: Reticulating splines...
	I1209 23:47:08.085350   71053 client.go:171] duration metric: took 27.787194854s to LocalClient.Create
	I1209 23:47:08.085371   71053 start.go:167] duration metric: took 27.787286332s to libmachine.API.Create "calico-030585"
	I1209 23:47:08.085380   71053 start.go:293] postStartSetup for "calico-030585" (driver="kvm2")
	I1209 23:47:08.085389   71053 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:47:08.085414   71053 main.go:141] libmachine: (calico-030585) Calling .DriverName
	I1209 23:47:08.085657   71053 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:47:08.085682   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHHostname
	I1209 23:47:08.087675   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.088026   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:08.088059   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.088216   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHPort
	I1209 23:47:08.088393   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:08.088545   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHUsername
	I1209 23:47:08.088693   71053 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/calico-030585/id_rsa Username:docker}
	I1209 23:47:08.173933   71053 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:47:08.177934   71053 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:47:08.177964   71053 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:47:08.178064   71053 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:47:08.178135   71053 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:47:08.178219   71053 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:47:08.187359   71053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:47:08.210285   71053 start.go:296] duration metric: took 124.890403ms for postStartSetup
	I1209 23:47:08.210367   71053 main.go:141] libmachine: (calico-030585) Calling .GetConfigRaw
	I1209 23:47:08.210992   71053 main.go:141] libmachine: (calico-030585) Calling .GetIP
	I1209 23:47:08.213681   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.214051   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:08.214082   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.214294   71053 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/config.json ...
	I1209 23:47:08.214491   71053 start.go:128] duration metric: took 27.936705858s to createHost
	I1209 23:47:08.214511   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHHostname
	I1209 23:47:08.216959   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.217295   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:08.217323   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.217519   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHPort
	I1209 23:47:08.217701   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:08.217882   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:08.217997   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHUsername
	I1209 23:47:08.218130   71053 main.go:141] libmachine: Using SSH client type: native
	I1209 23:47:08.218323   71053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1209 23:47:08.218336   71053 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:47:08.336288   71053 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788028.307270304
	
	I1209 23:47:08.336311   71053 fix.go:216] guest clock: 1733788028.307270304
	I1209 23:47:08.336321   71053 fix.go:229] Guest: 2024-12-09 23:47:08.307270304 +0000 UTC Remote: 2024-12-09 23:47:08.214501744 +0000 UTC m=+28.054437245 (delta=92.76856ms)
	I1209 23:47:08.336338   71053 fix.go:200] guest clock delta is within tolerance: 92.76856ms
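	The fix.go check above reads the guest clock with `date +%s.%N` over SSH and compares it against the host clock, accepting the machine only when the delta is small. A rough standalone equivalent is sketched below; the address and SSH user come from this log, while the 1-second threshold is an assumption for illustration (minikube applies its own tolerance):

	    # add -i <machine id_rsa path> if key-based auth is required, as in the log above
	    GUEST_TS=$(ssh docker@192.168.72.217 date +%s.%N)   # guest clock
	    HOST_TS=$(date +%s.%N)                              # host clock, read immediately after
	    awk -v g="$GUEST_TS" -v h="$HOST_TS" 'BEGIN {
	        d = h - g; if (d < 0) d = -d
	        printf "clock delta: %.3fs (%s)\n", d, (d < 1.0 ? "within assumed 1s tolerance" : "too large")
	    }'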
	I1209 23:47:08.336343   71053 start.go:83] releasing machines lock for "calico-030585", held for 28.05866203s
	I1209 23:47:08.336366   71053 main.go:141] libmachine: (calico-030585) Calling .DriverName
	I1209 23:47:08.336625   71053 main.go:141] libmachine: (calico-030585) Calling .GetIP
	I1209 23:47:08.339701   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.340064   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:08.340096   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.340373   71053 main.go:141] libmachine: (calico-030585) Calling .DriverName
	I1209 23:47:08.340858   71053 main.go:141] libmachine: (calico-030585) Calling .DriverName
	I1209 23:47:08.341043   71053 main.go:141] libmachine: (calico-030585) Calling .DriverName
	I1209 23:47:08.341130   71053 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:47:08.341172   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHHostname
	I1209 23:47:08.341304   71053 ssh_runner.go:195] Run: cat /version.json
	I1209 23:47:08.341332   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHHostname
	I1209 23:47:08.344307   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.344495   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.344700   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:08.344735   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.344861   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHPort
	I1209 23:47:08.344892   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:08.344922   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:08.345035   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:08.345102   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHPort
	I1209 23:47:08.345221   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHUsername
	I1209 23:47:08.345293   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHKeyPath
	I1209 23:47:08.345385   71053 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/calico-030585/id_rsa Username:docker}
	I1209 23:47:08.345444   71053 main.go:141] libmachine: (calico-030585) Calling .GetSSHUsername
	I1209 23:47:08.345560   71053 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/calico-030585/id_rsa Username:docker}
	I1209 23:47:08.452136   71053 ssh_runner.go:195] Run: systemctl --version
	I1209 23:47:08.458352   71053 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:47:08.620776   71053 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:47:08.627928   71053 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:47:08.627999   71053 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:47:08.643385   71053 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:47:08.643412   71053 start.go:495] detecting cgroup driver to use...
	I1209 23:47:08.643483   71053 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:47:08.662593   71053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:47:08.677439   71053 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:47:08.677523   71053 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:47:08.692514   71053 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:47:08.706316   71053 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:47:08.843209   71053 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:47:09.008453   71053 docker.go:233] disabling docker service ...
	I1209 23:47:09.008528   71053 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:47:09.023977   71053 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:47:09.040250   71053 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:47:09.175293   71053 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:47:09.327961   71053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:47:09.342022   71053 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:47:09.364925   71053 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:47:09.364994   71053 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:47:09.375640   71053 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:47:09.375719   71053 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:47:09.386298   71053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:47:09.396844   71053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:47:09.406739   71053 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:47:09.420975   71053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:47:09.435499   71053 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:47:09.453409   71053 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:47:09.463576   71053 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:47:09.473046   71053 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:47:09.473097   71053 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:47:09.486315   71053 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
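	The failed sysctl probe at 23:47:09.473 is expected: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is exactly the fallback taken here before IP forwarding is enabled. Outside the harness the same preparation reads roughly as follows (a sketch of the pattern, not the exact minikube code path):

	    sudo modprobe br_netfilter                             # creates /proc/sys/net/bridge/*
	    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1    # the key the earlier probe could not find
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'    # same forwarding toggle as in the log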
	I1209 23:47:09.498337   71053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:47:09.612815   71053 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:47:09.706671   71053 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:47:09.706744   71053 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:47:09.711221   71053 start.go:563] Will wait 60s for crictl version
	I1209 23:47:09.711286   71053 ssh_runner.go:195] Run: which crictl
	I1209 23:47:09.714810   71053 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:47:09.752258   71053 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:47:09.752345   71053 ssh_runner.go:195] Run: crio --version
	I1209 23:47:09.779707   71053 ssh_runner.go:195] Run: crio --version
	I1209 23:47:09.811493   71053 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
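	At this point CRI-O on calico-030585 has been reconfigured and restarted: the pause image and cgroup manager were rewritten in /etc/crio/crio.conf.d/02-crio.conf, conmon was pinned to the pod cgroup, and crictl confirmed CRI-O 1.29.1 on the socket. A consolidated form of those edits, with paths and values copied from the log above (a reference sketch, not the harness code itself):

	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    sudo systemctl daemon-reload && sudo systemctl restart crio
	    sudo crictl version                                    # should report RuntimeName cri-o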
	I1209 23:47:08.364426   71184 machine.go:93] provisionDockerMachine start ...
	I1209 23:47:08.364440   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .DriverName
	I1209 23:47:08.364634   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHHostname
	I1209 23:47:08.367550   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:08.367986   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:9a:8c", ip: ""} in network mk-cert-expiration-801840: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:15 +0000 UTC Type:0 Mac:52:54:00:f6:9a:8c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:cert-expiration-801840 Clientid:01:52:54:00:f6:9a:8c}
	I1209 23:47:08.368004   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined IP address 192.168.39.220 and MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:08.368138   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHPort
	I1209 23:47:08.368292   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHKeyPath
	I1209 23:47:08.368499   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHKeyPath
	I1209 23:47:08.368673   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHUsername
	I1209 23:47:08.368857   71184 main.go:141] libmachine: Using SSH client type: native
	I1209 23:47:08.369101   71184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1209 23:47:08.369110   71184 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:47:08.473003   71184 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-801840
	
	I1209 23:47:08.473026   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetMachineName
	I1209 23:47:08.473317   71184 buildroot.go:166] provisioning hostname "cert-expiration-801840"
	I1209 23:47:08.473340   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetMachineName
	I1209 23:47:08.473578   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHHostname
	I1209 23:47:08.476668   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:08.477010   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:9a:8c", ip: ""} in network mk-cert-expiration-801840: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:15 +0000 UTC Type:0 Mac:52:54:00:f6:9a:8c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:cert-expiration-801840 Clientid:01:52:54:00:f6:9a:8c}
	I1209 23:47:08.477034   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined IP address 192.168.39.220 and MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:08.477185   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHPort
	I1209 23:47:08.477451   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHKeyPath
	I1209 23:47:08.477630   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHKeyPath
	I1209 23:47:08.477783   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHUsername
	I1209 23:47:08.478013   71184 main.go:141] libmachine: Using SSH client type: native
	I1209 23:47:08.478231   71184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1209 23:47:08.478239   71184 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-801840 && echo "cert-expiration-801840" | sudo tee /etc/hostname
	I1209 23:47:08.603991   71184 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-801840
	
	I1209 23:47:08.604009   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHHostname
	I1209 23:47:08.606473   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:08.606866   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:9a:8c", ip: ""} in network mk-cert-expiration-801840: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:15 +0000 UTC Type:0 Mac:52:54:00:f6:9a:8c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:cert-expiration-801840 Clientid:01:52:54:00:f6:9a:8c}
	I1209 23:47:08.606891   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined IP address 192.168.39.220 and MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:08.607041   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHPort
	I1209 23:47:08.607257   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHKeyPath
	I1209 23:47:08.607461   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHKeyPath
	I1209 23:47:08.607677   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHUsername
	I1209 23:47:08.607849   71184 main.go:141] libmachine: Using SSH client type: native
	I1209 23:47:08.608002   71184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1209 23:47:08.608014   71184 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-801840' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-801840/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-801840' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:47:08.721506   71184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
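The shell fragment above is the provisioner's hosts-file fix-up: after setting the hostname over SSH, it pins 127.0.1.1 to the machine name so the node can resolve its own hostname. A minimal sketch of how such a script can be templated for an arbitrary machine name (illustrative only; the function name is invented and this is not minikube's actual helper):

    package main

    import "fmt"

    // buildHostsScript renders the same idempotent /etc/hosts update shown in the
    // log above for a given machine name.
    func buildHostsScript(name string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, name)
    }

    func main() {
    	fmt.Println(buildHostsScript("cert-expiration-801840"))
    }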
	I1209 23:47:08.721525   71184 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:47:08.721545   71184 buildroot.go:174] setting up certificates
	I1209 23:47:08.721556   71184 provision.go:84] configureAuth start
	I1209 23:47:08.721566   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetMachineName
	I1209 23:47:08.721852   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetIP
	I1209 23:47:08.725136   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:08.725530   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:9a:8c", ip: ""} in network mk-cert-expiration-801840: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:15 +0000 UTC Type:0 Mac:52:54:00:f6:9a:8c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:cert-expiration-801840 Clientid:01:52:54:00:f6:9a:8c}
	I1209 23:47:08.725546   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined IP address 192.168.39.220 and MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:08.725837   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHHostname
	I1209 23:47:08.728887   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:08.729291   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:9a:8c", ip: ""} in network mk-cert-expiration-801840: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:15 +0000 UTC Type:0 Mac:52:54:00:f6:9a:8c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:cert-expiration-801840 Clientid:01:52:54:00:f6:9a:8c}
	I1209 23:47:08.729313   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined IP address 192.168.39.220 and MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:08.729550   71184 provision.go:143] copyHostCerts
	I1209 23:47:08.729605   71184 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:47:08.729619   71184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:47:08.729688   71184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:47:08.729799   71184 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:47:08.729805   71184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:47:08.729835   71184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:47:08.729902   71184 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:47:08.729906   71184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:47:08.729932   71184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:47:08.729990   71184 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-801840 san=[127.0.0.1 192.168.39.220 cert-expiration-801840 localhost minikube]
	I1209 23:47:08.896128   71184 provision.go:177] copyRemoteCerts
	I1209 23:47:08.896180   71184 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:47:08.896200   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHHostname
	I1209 23:47:08.899488   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:08.900008   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:9a:8c", ip: ""} in network mk-cert-expiration-801840: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:15 +0000 UTC Type:0 Mac:52:54:00:f6:9a:8c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:cert-expiration-801840 Clientid:01:52:54:00:f6:9a:8c}
	I1209 23:47:08.900034   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined IP address 192.168.39.220 and MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:08.900273   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHPort
	I1209 23:47:08.900493   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHKeyPath
	I1209 23:47:08.900676   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHUsername
	I1209 23:47:08.900844   71184 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/cert-expiration-801840/id_rsa Username:docker}
	I1209 23:47:08.989298   71184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:47:09.017259   71184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 23:47:09.046662   71184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:47:09.078081   71184 provision.go:87] duration metric: took 356.462631ms to configureAuth
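configureAuth above (provision.go:117) generates a server certificate for the machine with SANs [127.0.0.1 192.168.39.220 cert-expiration-801840 localhost minikube], signs it with the workspace CA, and copyRemoteCerts then pushes ca.pem, server.pem and server-key.pem to /etc/docker on the VM. A rough sketch of that signing step with Go's crypto/x509; file paths, key size, validity period and the PKCS#1 CA-key assumption are illustrative, and error handling is elided for brevity:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Load the CA pair (placeholder paths standing in for the .minikube/certs files).
    	caPEM, _ := os.ReadFile("ca.pem")
    	caKeyPEM, _ := os.ReadFile("ca-key.pem")
    	caBlock, _ := pem.Decode(caPEM)
    	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key

    	// Fresh key pair for the server certificate.
    	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-801840"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN list matching the san=[...] entry in the log.
    		DNSNames:    []string{"cert-expiration-801840", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.220")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }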
	I1209 23:47:09.078100   71184 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:47:09.078267   71184 config.go:182] Loaded profile config "cert-expiration-801840": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:47:09.078322   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHHostname
	I1209 23:47:09.081466   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:09.081864   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:9a:8c", ip: ""} in network mk-cert-expiration-801840: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:15 +0000 UTC Type:0 Mac:52:54:00:f6:9a:8c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:cert-expiration-801840 Clientid:01:52:54:00:f6:9a:8c}
	I1209 23:47:09.081885   71184 main.go:141] libmachine: (cert-expiration-801840) DBG | domain cert-expiration-801840 has defined IP address 192.168.39.220 and MAC address 52:54:00:f6:9a:8c in network mk-cert-expiration-801840
	I1209 23:47:09.082062   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHPort
	I1209 23:47:09.082259   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHKeyPath
	I1209 23:47:09.082411   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHKeyPath
	I1209 23:47:09.082556   71184 main.go:141] libmachine: (cert-expiration-801840) Calling .GetSSHUsername
	I1209 23:47:09.082770   71184 main.go:141] libmachine: Using SSH client type: native
	I1209 23:47:09.082976   71184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1209 23:47:09.082991   71184 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:47:09.812840   71053 main.go:141] libmachine: (calico-030585) Calling .GetIP
	I1209 23:47:09.815736   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:09.816106   71053 main.go:141] libmachine: (calico-030585) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:76:92", ip: ""} in network mk-calico-030585: {Iface:virbr4 ExpiryTime:2024-12-10 00:46:55 +0000 UTC Type:0 Mac:52:54:00:01:76:92 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:calico-030585 Clientid:01:52:54:00:01:76:92}
	I1209 23:47:09.816137   71053 main.go:141] libmachine: (calico-030585) DBG | domain calico-030585 has defined IP address 192.168.72.217 and MAC address 52:54:00:01:76:92 in network mk-calico-030585
	I1209 23:47:09.816382   71053 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 23:47:09.820728   71053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:47:09.834602   71053 kubeadm.go:883] updating cluster {Name:calico-030585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-030585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:47:09.834742   71053 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:47:09.834804   71053 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:47:09.866298   71053 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:47:09.866360   71053 ssh_runner.go:195] Run: which lz4
	I1209 23:47:09.870444   71053 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:47:09.874495   71053 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:47:09.874537   71053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:47:05.652695   68772 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.367244796s)
	I1209 23:47:05.652731   68772 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:47:05.870136   68772 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:47:05.936254   68772 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:47:06.025055   68772 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:47:06.025149   68772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:47:06.525497   68772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:47:07.026013   68772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:47:07.045988   68772 api_server.go:72] duration metric: took 1.020933765s to wait for apiserver process to appear ...
	I1209 23:47:07.046015   68772 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:47:07.046035   68772 api_server.go:253] Checking apiserver healthz at https://192.168.50.55:8443/healthz ...
	I1209 23:47:09.817601   68772 api_server.go:279] https://192.168.50.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:47:09.817626   68772 api_server.go:103] status: https://192.168.50.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:47:09.817641   68772 api_server.go:253] Checking apiserver healthz at https://192.168.50.55:8443/healthz ...
	I1209 23:47:09.850980   68772 api_server.go:279] https://192.168.50.55:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:47:09.851013   68772 api_server.go:103] status: https://192.168.50.55:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:47:10.046159   68772 api_server.go:253] Checking apiserver healthz at https://192.168.50.55:8443/healthz ...
	I1209 23:47:10.056519   68772 api_server.go:279] https://192.168.50.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:47:10.056557   68772 api_server.go:103] status: https://192.168.50.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:47:06.219113   72683 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:47:06.219156   72683 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 23:47:06.219169   72683 cache.go:56] Caching tarball of preloaded images
	I1209 23:47:06.219246   72683 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:47:06.219262   72683 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 23:47:06.219380   72683 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/config.json ...
	I1209 23:47:06.219401   72683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/config.json: {Name:mk8c09e16811aa598176ab3dd79aac81fd288352 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:47:06.219622   72683 start.go:360] acquireMachinesLock for custom-flannel-030585: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
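Note the contrast between the two starts above: process 71053 (calico-030585) finds no preloaded images on the VM and scps the ~392 MB tarball across, while 72683 (custom-flannel-030585) hits the same tarball in the local cache (preload.go:146) and skips the download. A minimal sketch of that local-cache lookup, with the file-name pattern and directory layout taken from the log; the MINIKUBE_HOME environment variable is used here only as a stand-in for the integration workspace path:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // preloadTarball returns the expected cache path for a preload tarball
    // (preload schema v18, cri-o runtime, amd64), mirroring the paths in the log.
    func preloadTarball(minikubeHome, k8sVersion string) string {
    	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
    	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
    	p := preloadTarball(os.Getenv("MINIKUBE_HOME"), "v1.31.2")
    	if _, err := os.Stat(p); err == nil {
    		// Cache hit: skip the download and copy the tarball into the VM.
    		fmt.Println("found local preload:", p)
    	} else {
    		fmt.Println("no local preload, would download or extract images individually:", p)
    	}
    }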
	I1209 23:47:10.546439   68772 api_server.go:253] Checking apiserver healthz at https://192.168.50.55:8443/healthz ...
	I1209 23:47:10.553874   68772 api_server.go:279] https://192.168.50.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:47:10.553909   68772 api_server.go:103] status: https://192.168.50.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:47:11.046177   68772 api_server.go:253] Checking apiserver healthz at https://192.168.50.55:8443/healthz ...
	I1209 23:47:11.057547   68772 api_server.go:279] https://192.168.50.55:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:47:11.057580   68772 api_server.go:103] status: https://192.168.50.55:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:47:11.547134   68772 api_server.go:253] Checking apiserver healthz at https://192.168.50.55:8443/healthz ...
	I1209 23:47:11.552769   68772 api_server.go:279] https://192.168.50.55:8443/healthz returned 200:
	ok
	I1209 23:47:11.563675   68772 api_server.go:141] control plane version: v1.31.2
	I1209 23:47:11.563706   68772 api_server.go:131] duration metric: took 4.517684716s to wait for apiserver health ...
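The retry loop above (api_server.go:253/279) simply re-polls /healthz roughly every 500ms until the post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish and the endpoint flips from 403/500 to 200; the 403 responses already show the apiserver is serving, just not yet bootstrapped. A stripped-down version of that wait, assuming anonymous access and skipping certificate verification purely for illustration (minikube's real client authenticates against the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
    // or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz answered "ok"
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.55:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }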
	I1209 23:47:11.563717   68772 cni.go:84] Creating CNI manager for ""
	I1209 23:47:11.563725   68772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:47:11.565089   68772 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:47:11.566063   68772 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:47:11.578792   68772 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
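cni.go:146 picks the built-in bridge CNI for the kvm2 + crio combination and writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact file contents are not shown in the log; a representative bridge + portmap conflist of roughly that shape (the network name and pod subnet are assumptions, not necessarily what minikube ships), written from Go to keep the example self-contained:

    package main

    import "os"

    // An illustrative bridge CNI config in the spirit of minikube's 1-k8s.conflist.
    const bridgeConflist = `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
    	// 0644 keeps the file readable by the CRI runtime.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }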
	I1209 23:47:11.597957   68772 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:47:11.598047   68772 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 23:47:11.598076   68772 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 23:47:11.617474   68772 system_pods.go:59] 8 kube-system pods found
	I1209 23:47:11.617520   68772 system_pods.go:61] "coredns-7c65d6cfc9-rp5pj" [6c294c6c-ce67-47d7-bf10-7b3524573f3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:47:11.617540   68772 system_pods.go:61] "coredns-7c65d6cfc9-v78r7" [b7e31f28-d33f-42fe-87f5-e477a8f2f2e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:47:11.617554   68772 system_pods.go:61] "etcd-kubernetes-upgrade-996806" [0522b2c7-563f-4385-9fef-92d6492a528d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:47:11.617565   68772 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-996806" [cbdf93a7-2aee-4a8f-9be5-7743cd0ae5d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:47:11.617581   68772 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-996806" [2d0a314a-d0b1-43f3-a049-e168b32b5058] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:47:11.617589   68772 system_pods.go:61] "kube-proxy-kn7vn" [6b740206-c18e-4c47-a382-2b44ed1644da] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 23:47:11.617601   68772 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-996806" [fb8927cd-02c2-4271-8fc1-f5810473af34] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:47:11.617612   68772 system_pods.go:61] "storage-provisioner" [fc0434f5-28b9-442f-bd64-5281960fc1dc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 23:47:11.617626   68772 system_pods.go:74] duration metric: took 19.640843ms to wait for pod list to return data ...
	I1209 23:47:11.617639   68772 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:47:11.624106   68772 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:47:11.624139   68772 node_conditions.go:123] node cpu capacity is 2
	I1209 23:47:11.624152   68772 node_conditions.go:105] duration metric: took 6.507544ms to run NodePressure ...
	I1209 23:47:11.624173   68772 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:47:11.964580   68772 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:47:11.977458   68772 ops.go:34] apiserver oom_adj: -16
	I1209 23:47:11.977480   68772 kubeadm.go:597] duration metric: took 8.081684037s to restartPrimaryControlPlane
	I1209 23:47:11.977489   68772 kubeadm.go:394] duration metric: took 8.190699336s to StartCluster
	I1209 23:47:11.977505   68772 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:47:11.977584   68772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:47:11.978759   68772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:47:11.979069   68772 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:47:11.979124   68772 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 23:47:11.979265   68772 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-996806"
	I1209 23:47:11.979294   68772 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-996806"
	W1209 23:47:11.979303   68772 addons.go:243] addon storage-provisioner should already be in state true
	I1209 23:47:11.979295   68772 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-996806"
	I1209 23:47:11.979322   68772 config.go:182] Loaded profile config "kubernetes-upgrade-996806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:47:11.979362   68772 host.go:66] Checking if "kubernetes-upgrade-996806" exists ...
	I1209 23:47:11.979326   68772 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-996806"
	I1209 23:47:11.979717   68772 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:47:11.979762   68772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:47:11.979825   68772 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:47:11.979883   68772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:47:11.980837   68772 out.go:177] * Verifying Kubernetes components...
	I1209 23:47:11.982220   68772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:47:11.998789   68772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
	I1209 23:47:11.999353   68772 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:47:11.999664   68772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I1209 23:47:11.999914   68772 main.go:141] libmachine: Using API Version  1
	I1209 23:47:11.999931   68772 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:47:12.000159   68772 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:47:12.000285   68772 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:47:12.000500   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetState
	I1209 23:47:12.000636   68772 main.go:141] libmachine: Using API Version  1
	I1209 23:47:12.000655   68772 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:47:12.001046   68772 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:47:12.001590   68772 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:47:12.001628   68772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:47:12.003643   68772 kapi.go:59] client config for kubernetes-upgrade-996806: &rest.Config{Host:"https://192.168.50.55:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/client.crt", KeyFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kubernetes-upgrade-996806/client.key", CAFile:"/home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c9a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 23:47:12.004010   68772 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-996806"
	W1209 23:47:12.004027   68772 addons.go:243] addon default-storageclass should already be in state true
	I1209 23:47:12.004053   68772 host.go:66] Checking if "kubernetes-upgrade-996806" exists ...
	I1209 23:47:12.004411   68772 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:47:12.004456   68772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:47:12.022439   68772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I1209 23:47:12.022880   68772 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:47:12.023200   68772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39301
	I1209 23:47:12.023584   68772 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:47:12.023994   68772 main.go:141] libmachine: Using API Version  1
	I1209 23:47:12.024008   68772 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:47:12.024105   68772 main.go:141] libmachine: Using API Version  1
	I1209 23:47:12.024126   68772 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:47:12.024389   68772 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:47:12.024459   68772 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:47:12.024609   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetState
	I1209 23:47:12.024924   68772 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:47:12.024951   68772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:47:12.026487   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .DriverName
	I1209 23:47:12.028566   68772 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:47:12.029996   68772 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:47:12.030017   68772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:47:12.030036   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHHostname
	I1209 23:47:12.033627   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:47:12.034160   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:44:26 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:47:12.034191   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:47:12.034528   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHPort
	I1209 23:47:12.034735   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:47:12.034947   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHUsername
	I1209 23:47:12.035146   68772 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806/id_rsa Username:docker}
	I1209 23:47:12.048253   68772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45621
	I1209 23:47:12.048716   68772 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:47:12.049379   68772 main.go:141] libmachine: Using API Version  1
	I1209 23:47:12.049405   68772 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:47:12.049680   68772 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:47:12.049880   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetState
	I1209 23:47:12.051922   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .DriverName
	I1209 23:47:12.052146   68772 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:47:12.052161   68772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:47:12.052178   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHHostname
	I1209 23:47:12.055498   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:47:12.056065   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:d6", ip: ""} in network mk-kubernetes-upgrade-996806: {Iface:virbr2 ExpiryTime:2024-12-10 00:44:26 +0000 UTC Type:0 Mac:52:54:00:e4:8f:d6 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-996806 Clientid:01:52:54:00:e4:8f:d6}
	I1209 23:47:12.056094   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | domain kubernetes-upgrade-996806 has defined IP address 192.168.50.55 and MAC address 52:54:00:e4:8f:d6 in network mk-kubernetes-upgrade-996806
	I1209 23:47:12.056198   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHPort
	I1209 23:47:12.056337   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHKeyPath
	I1209 23:47:12.056444   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .GetSSHUsername
	I1209 23:47:12.056518   68772 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/kubernetes-upgrade-996806/id_rsa Username:docker}
	I1209 23:47:12.208726   68772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:47:12.225482   68772 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:47:12.225567   68772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:47:12.247503   68772 api_server.go:72] duration metric: took 268.388416ms to wait for apiserver process to appear ...
	I1209 23:47:12.247528   68772 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:47:12.247550   68772 api_server.go:253] Checking apiserver healthz at https://192.168.50.55:8443/healthz ...
	I1209 23:47:12.252965   68772 api_server.go:279] https://192.168.50.55:8443/healthz returned 200:
	ok
	I1209 23:47:12.253918   68772 api_server.go:141] control plane version: v1.31.2
	I1209 23:47:12.253939   68772 api_server.go:131] duration metric: took 6.404929ms to wait for apiserver health ...
	I1209 23:47:12.253946   68772 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:47:12.260727   68772 system_pods.go:59] 8 kube-system pods found
	I1209 23:47:12.260754   68772 system_pods.go:61] "coredns-7c65d6cfc9-rp5pj" [6c294c6c-ce67-47d7-bf10-7b3524573f3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:47:12.260761   68772 system_pods.go:61] "coredns-7c65d6cfc9-v78r7" [b7e31f28-d33f-42fe-87f5-e477a8f2f2e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:47:12.260769   68772 system_pods.go:61] "etcd-kubernetes-upgrade-996806" [0522b2c7-563f-4385-9fef-92d6492a528d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:47:12.260775   68772 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-996806" [cbdf93a7-2aee-4a8f-9be5-7743cd0ae5d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:47:12.260783   68772 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-996806" [2d0a314a-d0b1-43f3-a049-e168b32b5058] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:47:12.260790   68772 system_pods.go:61] "kube-proxy-kn7vn" [6b740206-c18e-4c47-a382-2b44ed1644da] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 23:47:12.260796   68772 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-996806" [fb8927cd-02c2-4271-8fc1-f5810473af34] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:47:12.260804   68772 system_pods.go:61] "storage-provisioner" [fc0434f5-28b9-442f-bd64-5281960fc1dc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 23:47:12.260810   68772 system_pods.go:74] duration metric: took 6.85888ms to wait for pod list to return data ...
	I1209 23:47:12.260824   68772 kubeadm.go:582] duration metric: took 281.712654ms to wait for: map[apiserver:true system_pods:true]
	I1209 23:47:12.260837   68772 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:47:12.263075   68772 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:47:12.263096   68772 node_conditions.go:123] node cpu capacity is 2
	I1209 23:47:12.263106   68772 node_conditions.go:105] duration metric: took 2.264625ms to run NodePressure ...
	I1209 23:47:12.263116   68772 start.go:241] waiting for startup goroutines ...
	I1209 23:47:12.351384   68772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:47:12.390472   68772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:47:13.393859   68772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.042440501s)
	I1209 23:47:13.393960   68772 main.go:141] libmachine: Making call to close driver server
	I1209 23:47:13.393884   68772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.003384918s)
	I1209 23:47:13.393994   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .Close
	I1209 23:47:13.394015   68772 main.go:141] libmachine: Making call to close driver server
	I1209 23:47:13.394087   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .Close
	I1209 23:47:13.394458   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Closing plugin on server side
	I1209 23:47:13.394468   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Closing plugin on server side
	I1209 23:47:13.394479   68772 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:47:13.394493   68772 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:47:13.394495   68772 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:47:13.394505   68772 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:47:13.394520   68772 main.go:141] libmachine: Making call to close driver server
	I1209 23:47:13.394527   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .Close
	I1209 23:47:13.394510   68772 main.go:141] libmachine: Making call to close driver server
	I1209 23:47:13.394651   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .Close
	I1209 23:47:13.394746   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Closing plugin on server side
	I1209 23:47:13.394747   68772 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:47:13.394771   68772 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:47:13.395202   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) DBG | Closing plugin on server side
	I1209 23:47:13.395234   68772 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:47:13.395240   68772 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:47:13.404485   68772 main.go:141] libmachine: Making call to close driver server
	I1209 23:47:13.404508   68772 main.go:141] libmachine: (kubernetes-upgrade-996806) Calling .Close
	I1209 23:47:13.404839   68772 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:47:13.404869   68772 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:47:13.491211   68772 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1209 23:47:13.561712   68772 addons.go:510] duration metric: took 1.582580463s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 23:47:13.561795   68772 start.go:246] waiting for cluster config update ...
	I1209 23:47:13.561811   68772 start.go:255] writing updated cluster config ...
	I1209 23:47:13.562115   68772 ssh_runner.go:195] Run: rm -f paused
	I1209 23:47:13.612396   68772 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 23:47:13.707449   68772 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-996806" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.552132818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788034552105693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=247d9b92-6245-40d1-b8db-aa4d92a88513 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.553488554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60212103-0437-4449-8a6e-39e444064c72 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.553568577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60212103-0437-4449-8a6e-39e444064c72 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.553881430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7416be3f97403c9804cb105b7240e80d38be4d129fee16dace263c539dd809ca,PodSandboxId:895894e06ec60da1b07a932255556e75b9e4f2af678750ebf03b54b414e620b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788030863540319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rp5pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c294c6c-ce67-47d7-bf10-7b3524573f3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d2f8a619eace3e0cdd082c3b83b82082f3f176747318ccd5dc09d0d0446d91,PodSandboxId:51540dfe4e57101e108cc513dc2dfd7ce6253b58df964a970f06aea8b0af297d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788030893541430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v78r7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b7e31f28-d33f-42fe-87f5-e477a8f2f2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6db0e4485b2c02a4aaca1681d5018d898ede40dccc75d22c0b5e6f3d661254ed,PodSandboxId:711b7a8677d579994c5e45cb26ab88ebfa34001d17747f7f02a50cf67082f84b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,
State:CONTAINER_RUNNING,CreatedAt:1733788026699686898,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6a97421f07db5483d4a92f31084ace,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:124bf973d4fdd7823fffb39e4a88700800c61e672c5e34d2881558414e7f816b,PodSandboxId:f4cb157528384a5d41901783181067b60c9f532256f65f007baa8fbfaf948a68,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,State:CONTAINER_RUNNING,CreatedAt:1733788026672110345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 640808734d51780f3fabaa14e1ae6e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07cbf04fbef958986dff652501b5f44e1e71f31a9a89928aca727b4ba284efa,PodSandboxId:380ec30e3f9afb3980b85505c11f188a7ad57baf8b4ae17848c806826af6a063,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNIN
G,CreatedAt:1733788026646851147,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdde483e30ced9ef56abee406e241ef3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e549cd2ed59bf0a7777745a0a2fe94433a229c4e13034947450e637d2d205e,PodSandboxId:3ebf23f29c9d32da164232bd4a4e3da24dc7500d3dd440128c608fffda2ca0e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733788026653053337,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 235574d57de4ad689fde2516d5b163c0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a30eda4d616612ae1b9923a2d3da3ac09bf1aef2035467ff69c48a23dcc8267,PodSandboxId:faf526d1ef40eb8b731b24318aadd53e9338c10a08b6d9c3da9e7bac3a517caa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:173378
7931443176833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v78r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7e31f28-d33f-42fe-87f5-e477a8f2f2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec509d79994a9fa971bcee9bdebde49aa78b9eea26e5ac437bd849244c8086a,PodSandboxId:8e3a4ad5863a95d895fed680daccea790832959bb6914eca857b5538e626c22d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675917
90e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733787931560296466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rp5pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c294c6c-ce67-47d7-bf10-7b3524573f3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cb31dd6d6f5e73595ba02ec2c7603363e9c7ebe01a06e2e59f66c919d71e0ad,PodSandboxId:0
09e86724fd0178c8802b17080e6ad724d490020a69e1aed74b6659d3d33bd29,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733787930216151395,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdde483e30ced9ef56abee406e241ef3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907629fb1e57d8e220943dfcc3be03fcbf254bd86303941d3f2f84294fff2150,PodSandboxId:67e23dc
6bc805d15caaa62ff318b814d5916628fba5cea3615cbf1ee1062b04c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733787930173890753,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6a97421f07db5483d4a92f31084ace,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6413ccbd5bef61c269aca40a19f8f526400c949e8a99e4ab000a74b02653fc43
,PodSandboxId:8eeab68b4321f130e40cfac41e38a12f8a974ab6e4e933e6214be542b3b97472,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733787930153591474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 640808734d51780f3fabaa14e1ae6e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3072a8a1aca3b53833ead8a6d827cc2bfcce738bd98bb25a4fa41f4f7640d3f3,PodSandboxId:aaed81cf665aa78f66ba29
2ddcd1729357d68f523b1b1cdc61c156559cc48fc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733787930028203955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 235574d57de4ad689fde2516d5b163c0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4684e841001684b0a88efea522e4d83851163f561b8275f9217e8dcd839425f1,PodSandboxId:3025e27807430a410ef32ad3480f
82ed2704afdd6355cf92b1226a2114395364,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733787897431153794,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0434f5-28b9-442f-bd64-5281960fc1dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4eb7cfde4c7e2b87254bfbc9e3d1cc444bca2277b127476f09b81baf7190d6,PodSandboxId:2d645e9a30a25feb03074d953022706bbb3ab85b0
a09e7901f3eeebcad48ef49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733787896573198602,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kn7vn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b740206-c18e-4c47-a382-2b44ed1644da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60212103-0437-4449-8a6e-39e444064c72 name=/runtime.v1.RuntimeService/ListContainers
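The response above is the full, unfiltered container list for this node: the attempt-2 control-plane and CoreDNS containers are CONTAINER_RUNNING, while the attempt-0/1 containers from before the upgrade restart remain CONTAINER_EXITED. The sketch below replays the same Version and ListContainers calls seen in the log and prints a compact per-container summary instead of the raw protobuf dump; as above, the socket path is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s (API %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// An empty filter corresponds to the "No filters were applied, returning full container list" lines.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%-24s attempt=%d state=%s pod=%s\n",
			c.Metadata.Name, c.Metadata.Attempt, c.State,
			c.Labels["io.kubernetes.pod.name"])
	}
}

The same information can also be read over this socket with crictl ps -a, which is often the quicker way to inspect it on a live node.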
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.602581551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d18bc22-7ae4-41d4-961c-e7acdf600d6f name=/runtime.v1.RuntimeService/Version
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.602678819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d18bc22-7ae4-41d4-961c-e7acdf600d6f name=/runtime.v1.RuntimeService/Version
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.603775296Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6bda6cb-da60-4a3f-98ae-4d5bba1834bb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.604245433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788034604220320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6bda6cb-da60-4a3f-98ae-4d5bba1834bb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.604867039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=522cefc5-d603-447e-a4f7-cf40dcd69dde name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.604936481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=522cefc5-d603-447e-a4f7-cf40dcd69dde name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.605301305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7416be3f97403c9804cb105b7240e80d38be4d129fee16dace263c539dd809ca,PodSandboxId:895894e06ec60da1b07a932255556e75b9e4f2af678750ebf03b54b414e620b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788030863540319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rp5pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c294c6c-ce67-47d7-bf10-7b3524573f3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d2f8a619eace3e0cdd082c3b83b82082f3f176747318ccd5dc09d0d0446d91,PodSandboxId:51540dfe4e57101e108cc513dc2dfd7ce6253b58df964a970f06aea8b0af297d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788030893541430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v78r7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b7e31f28-d33f-42fe-87f5-e477a8f2f2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6db0e4485b2c02a4aaca1681d5018d898ede40dccc75d22c0b5e6f3d661254ed,PodSandboxId:711b7a8677d579994c5e45cb26ab88ebfa34001d17747f7f02a50cf67082f84b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,
State:CONTAINER_RUNNING,CreatedAt:1733788026699686898,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6a97421f07db5483d4a92f31084ace,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:124bf973d4fdd7823fffb39e4a88700800c61e672c5e34d2881558414e7f816b,PodSandboxId:f4cb157528384a5d41901783181067b60c9f532256f65f007baa8fbfaf948a68,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,State:CONTAINER_RUNNING,CreatedAt:1733788026672110345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 640808734d51780f3fabaa14e1ae6e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07cbf04fbef958986dff652501b5f44e1e71f31a9a89928aca727b4ba284efa,PodSandboxId:380ec30e3f9afb3980b85505c11f188a7ad57baf8b4ae17848c806826af6a063,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNIN
G,CreatedAt:1733788026646851147,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdde483e30ced9ef56abee406e241ef3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e549cd2ed59bf0a7777745a0a2fe94433a229c4e13034947450e637d2d205e,PodSandboxId:3ebf23f29c9d32da164232bd4a4e3da24dc7500d3dd440128c608fffda2ca0e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733788026653053337,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 235574d57de4ad689fde2516d5b163c0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a30eda4d616612ae1b9923a2d3da3ac09bf1aef2035467ff69c48a23dcc8267,PodSandboxId:faf526d1ef40eb8b731b24318aadd53e9338c10a08b6d9c3da9e7bac3a517caa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:173378
7931443176833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v78r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7e31f28-d33f-42fe-87f5-e477a8f2f2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec509d79994a9fa971bcee9bdebde49aa78b9eea26e5ac437bd849244c8086a,PodSandboxId:8e3a4ad5863a95d895fed680daccea790832959bb6914eca857b5538e626c22d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675917
90e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733787931560296466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rp5pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c294c6c-ce67-47d7-bf10-7b3524573f3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cb31dd6d6f5e73595ba02ec2c7603363e9c7ebe01a06e2e59f66c919d71e0ad,PodSandboxId:0
09e86724fd0178c8802b17080e6ad724d490020a69e1aed74b6659d3d33bd29,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733787930216151395,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdde483e30ced9ef56abee406e241ef3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907629fb1e57d8e220943dfcc3be03fcbf254bd86303941d3f2f84294fff2150,PodSandboxId:67e23dc
6bc805d15caaa62ff318b814d5916628fba5cea3615cbf1ee1062b04c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733787930173890753,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6a97421f07db5483d4a92f31084ace,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6413ccbd5bef61c269aca40a19f8f526400c949e8a99e4ab000a74b02653fc43
,PodSandboxId:8eeab68b4321f130e40cfac41e38a12f8a974ab6e4e933e6214be542b3b97472,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733787930153591474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 640808734d51780f3fabaa14e1ae6e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3072a8a1aca3b53833ead8a6d827cc2bfcce738bd98bb25a4fa41f4f7640d3f3,PodSandboxId:aaed81cf665aa78f66ba29
2ddcd1729357d68f523b1b1cdc61c156559cc48fc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733787930028203955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 235574d57de4ad689fde2516d5b163c0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4684e841001684b0a88efea522e4d83851163f561b8275f9217e8dcd839425f1,PodSandboxId:3025e27807430a410ef32ad3480f
82ed2704afdd6355cf92b1226a2114395364,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733787897431153794,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0434f5-28b9-442f-bd64-5281960fc1dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4eb7cfde4c7e2b87254bfbc9e3d1cc444bca2277b127476f09b81baf7190d6,PodSandboxId:2d645e9a30a25feb03074d953022706bbb3ab85b0
a09e7901f3eeebcad48ef49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733787896573198602,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kn7vn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b740206-c18e-4c47-a382-2b44ed1644da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=522cefc5-d603-447e-a4f7-cf40dcd69dde name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.656851514Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c87885a5-05f2-489e-951b-ce5b662ed0d1 name=/runtime.v1.RuntimeService/Version
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.656929406Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c87885a5-05f2-489e-951b-ce5b662ed0d1 name=/runtime.v1.RuntimeService/Version
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.658269274Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e7deb15-9c79-4d1e-881e-3e4e3e0b1523 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.658773514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788034658742598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e7deb15-9c79-4d1e-881e-3e4e3e0b1523 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.659553961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2da46cd-57d3-4ea3-adbd-801f25099403 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.659639957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2da46cd-57d3-4ea3-adbd-801f25099403 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.660601324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7416be3f97403c9804cb105b7240e80d38be4d129fee16dace263c539dd809ca,PodSandboxId:895894e06ec60da1b07a932255556e75b9e4f2af678750ebf03b54b414e620b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788030863540319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rp5pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c294c6c-ce67-47d7-bf10-7b3524573f3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d2f8a619eace3e0cdd082c3b83b82082f3f176747318ccd5dc09d0d0446d91,PodSandboxId:51540dfe4e57101e108cc513dc2dfd7ce6253b58df964a970f06aea8b0af297d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788030893541430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v78r7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b7e31f28-d33f-42fe-87f5-e477a8f2f2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6db0e4485b2c02a4aaca1681d5018d898ede40dccc75d22c0b5e6f3d661254ed,PodSandboxId:711b7a8677d579994c5e45cb26ab88ebfa34001d17747f7f02a50cf67082f84b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,
State:CONTAINER_RUNNING,CreatedAt:1733788026699686898,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6a97421f07db5483d4a92f31084ace,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:124bf973d4fdd7823fffb39e4a88700800c61e672c5e34d2881558414e7f816b,PodSandboxId:f4cb157528384a5d41901783181067b60c9f532256f65f007baa8fbfaf948a68,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,State:CONTAINER_RUNNING,CreatedAt:1733788026672110345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 640808734d51780f3fabaa14e1ae6e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07cbf04fbef958986dff652501b5f44e1e71f31a9a89928aca727b4ba284efa,PodSandboxId:380ec30e3f9afb3980b85505c11f188a7ad57baf8b4ae17848c806826af6a063,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNIN
G,CreatedAt:1733788026646851147,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdde483e30ced9ef56abee406e241ef3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e549cd2ed59bf0a7777745a0a2fe94433a229c4e13034947450e637d2d205e,PodSandboxId:3ebf23f29c9d32da164232bd4a4e3da24dc7500d3dd440128c608fffda2ca0e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733788026653053337,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 235574d57de4ad689fde2516d5b163c0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a30eda4d616612ae1b9923a2d3da3ac09bf1aef2035467ff69c48a23dcc8267,PodSandboxId:faf526d1ef40eb8b731b24318aadd53e9338c10a08b6d9c3da9e7bac3a517caa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:173378
7931443176833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v78r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7e31f28-d33f-42fe-87f5-e477a8f2f2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec509d79994a9fa971bcee9bdebde49aa78b9eea26e5ac437bd849244c8086a,PodSandboxId:8e3a4ad5863a95d895fed680daccea790832959bb6914eca857b5538e626c22d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675917
90e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733787931560296466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rp5pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c294c6c-ce67-47d7-bf10-7b3524573f3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cb31dd6d6f5e73595ba02ec2c7603363e9c7ebe01a06e2e59f66c919d71e0ad,PodSandboxId:0
09e86724fd0178c8802b17080e6ad724d490020a69e1aed74b6659d3d33bd29,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733787930216151395,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdde483e30ced9ef56abee406e241ef3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907629fb1e57d8e220943dfcc3be03fcbf254bd86303941d3f2f84294fff2150,PodSandboxId:67e23dc
6bc805d15caaa62ff318b814d5916628fba5cea3615cbf1ee1062b04c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733787930173890753,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6a97421f07db5483d4a92f31084ace,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6413ccbd5bef61c269aca40a19f8f526400c949e8a99e4ab000a74b02653fc43
,PodSandboxId:8eeab68b4321f130e40cfac41e38a12f8a974ab6e4e933e6214be542b3b97472,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733787930153591474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 640808734d51780f3fabaa14e1ae6e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3072a8a1aca3b53833ead8a6d827cc2bfcce738bd98bb25a4fa41f4f7640d3f3,PodSandboxId:aaed81cf665aa78f66ba29
2ddcd1729357d68f523b1b1cdc61c156559cc48fc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733787930028203955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 235574d57de4ad689fde2516d5b163c0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4684e841001684b0a88efea522e4d83851163f561b8275f9217e8dcd839425f1,PodSandboxId:3025e27807430a410ef32ad3480f
82ed2704afdd6355cf92b1226a2114395364,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733787897431153794,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0434f5-28b9-442f-bd64-5281960fc1dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4eb7cfde4c7e2b87254bfbc9e3d1cc444bca2277b127476f09b81baf7190d6,PodSandboxId:2d645e9a30a25feb03074d953022706bbb3ab85b0
a09e7901f3eeebcad48ef49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733787896573198602,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kn7vn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b740206-c18e-4c47-a382-2b44ed1644da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2da46cd-57d3-4ea3-adbd-801f25099403 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.708071453Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7b0f2ca-3353-4c7b-b987-1cf0130fff09 name=/runtime.v1.RuntimeService/Version
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.708189436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7b0f2ca-3353-4c7b-b987-1cf0130fff09 name=/runtime.v1.RuntimeService/Version
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.709796636Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c402432f-7ba1-485d-8930-e021a5376d45 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.710379915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788034710351550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c402432f-7ba1-485d-8930-e021a5376d45 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.711396356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e82934d7-1e19-4138-b356-81e3a0240f86 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.711464194Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e82934d7-1e19-4138-b356-81e3a0240f86 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:47:14 kubernetes-upgrade-996806 crio[3171]: time="2024-12-09 23:47:14.713136901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7416be3f97403c9804cb105b7240e80d38be4d129fee16dace263c539dd809ca,PodSandboxId:895894e06ec60da1b07a932255556e75b9e4f2af678750ebf03b54b414e620b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788030863540319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rp5pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c294c6c-ce67-47d7-bf10-7b3524573f3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d2f8a619eace3e0cdd082c3b83b82082f3f176747318ccd5dc09d0d0446d91,PodSandboxId:51540dfe4e57101e108cc513dc2dfd7ce6253b58df964a970f06aea8b0af297d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788030893541430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v78r7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b7e31f28-d33f-42fe-87f5-e477a8f2f2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6db0e4485b2c02a4aaca1681d5018d898ede40dccc75d22c0b5e6f3d661254ed,PodSandboxId:711b7a8677d579994c5e45cb26ab88ebfa34001d17747f7f02a50cf67082f84b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,
State:CONTAINER_RUNNING,CreatedAt:1733788026699686898,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6a97421f07db5483d4a92f31084ace,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:124bf973d4fdd7823fffb39e4a88700800c61e672c5e34d2881558414e7f816b,PodSandboxId:f4cb157528384a5d41901783181067b60c9f532256f65f007baa8fbfaf948a68,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,State:CONTAINER_RUNNING,CreatedAt:1733788026672110345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 640808734d51780f3fabaa14e1ae6e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c07cbf04fbef958986dff652501b5f44e1e71f31a9a89928aca727b4ba284efa,PodSandboxId:380ec30e3f9afb3980b85505c11f188a7ad57baf8b4ae17848c806826af6a063,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNIN
G,CreatedAt:1733788026646851147,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdde483e30ced9ef56abee406e241ef3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e549cd2ed59bf0a7777745a0a2fe94433a229c4e13034947450e637d2d205e,PodSandboxId:3ebf23f29c9d32da164232bd4a4e3da24dc7500d3dd440128c608fffda2ca0e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733788026653053337,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 235574d57de4ad689fde2516d5b163c0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a30eda4d616612ae1b9923a2d3da3ac09bf1aef2035467ff69c48a23dcc8267,PodSandboxId:faf526d1ef40eb8b731b24318aadd53e9338c10a08b6d9c3da9e7bac3a517caa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:173378
7931443176833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v78r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7e31f28-d33f-42fe-87f5-e477a8f2f2e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec509d79994a9fa971bcee9bdebde49aa78b9eea26e5ac437bd849244c8086a,PodSandboxId:8e3a4ad5863a95d895fed680daccea790832959bb6914eca857b5538e626c22d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675917
90e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733787931560296466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rp5pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c294c6c-ce67-47d7-bf10-7b3524573f3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cb31dd6d6f5e73595ba02ec2c7603363e9c7ebe01a06e2e59f66c919d71e0ad,PodSandboxId:0
09e86724fd0178c8802b17080e6ad724d490020a69e1aed74b6659d3d33bd29,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733787930216151395,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdde483e30ced9ef56abee406e241ef3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907629fb1e57d8e220943dfcc3be03fcbf254bd86303941d3f2f84294fff2150,PodSandboxId:67e23dc
6bc805d15caaa62ff318b814d5916628fba5cea3615cbf1ee1062b04c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733787930173890753,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee6a97421f07db5483d4a92f31084ace,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6413ccbd5bef61c269aca40a19f8f526400c949e8a99e4ab000a74b02653fc43
,PodSandboxId:8eeab68b4321f130e40cfac41e38a12f8a974ab6e4e933e6214be542b3b97472,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733787930153591474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 640808734d51780f3fabaa14e1ae6e0d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3072a8a1aca3b53833ead8a6d827cc2bfcce738bd98bb25a4fa41f4f7640d3f3,PodSandboxId:aaed81cf665aa78f66ba29
2ddcd1729357d68f523b1b1cdc61c156559cc48fc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733787930028203955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-996806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 235574d57de4ad689fde2516d5b163c0,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4684e841001684b0a88efea522e4d83851163f561b8275f9217e8dcd839425f1,PodSandboxId:3025e27807430a410ef32ad3480f
82ed2704afdd6355cf92b1226a2114395364,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733787897431153794,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc0434f5-28b9-442f-bd64-5281960fc1dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4eb7cfde4c7e2b87254bfbc9e3d1cc444bca2277b127476f09b81baf7190d6,PodSandboxId:2d645e9a30a25feb03074d953022706bbb3ab85b0
a09e7901f3eeebcad48ef49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733787896573198602,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kn7vn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b740206-c18e-4c47-a382-2b44ed1644da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e82934d7-1e19-4138-b356-81e3a0240f86 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c2d2f8a619eac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   4 seconds ago        Running             coredns                   2                   51540dfe4e571       coredns-7c65d6cfc9-v78r7
	7416be3f97403       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   4 seconds ago        Running             coredns                   2                   895894e06ec60       coredns-7c65d6cfc9-rp5pj
	6db0e4485b2c0       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   8 seconds ago        Running             kube-controller-manager   2                   711b7a8677d57       kube-controller-manager-kubernetes-upgrade-996806
	124bf973d4fdd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   8 seconds ago        Running             etcd                      2                   f4cb157528384       etcd-kubernetes-upgrade-996806
	28e549cd2ed59       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   8 seconds ago        Running             kube-apiserver            2                   3ebf23f29c9d3       kube-apiserver-kubernetes-upgrade-996806
	c07cbf04fbef9       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   8 seconds ago        Running             kube-scheduler            2                   380ec30e3f9af       kube-scheduler-kubernetes-upgrade-996806
	0ec509d79994a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   1                   8e3a4ad5863a9       coredns-7c65d6cfc9-rp5pj
	8a30eda4d6166       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   1                   faf526d1ef40e       coredns-7c65d6cfc9-v78r7
	4cb31dd6d6f5e       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   About a minute ago   Exited              kube-scheduler            1                   009e86724fd01       kube-scheduler-kubernetes-upgrade-996806
	907629fb1e57d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   About a minute ago   Exited              kube-controller-manager   1                   67e23dc6bc805       kube-controller-manager-kubernetes-upgrade-996806
	6413ccbd5bef6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      1                   8eeab68b4321f       etcd-kubernetes-upgrade-996806
	3072a8a1aca3b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   About a minute ago   Exited              kube-apiserver            1                   aaed81cf665aa       kube-apiserver-kubernetes-upgrade-996806
	4684e84100168       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 minutes ago        Exited              storage-provisioner       0                   3025e27807430       storage-provisioner
	6b4eb7cfde4c7       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   2 minutes ago        Exited              kube-proxy                0                   2d645e9a30a25       kube-proxy-kn7vn
	
	
	==> coredns [0ec509d79994a9fa971bcee9bdebde49aa78b9eea26e5ac437bd849244c8086a] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7416be3f97403c9804cb105b7240e80d38be4d129fee16dace263c539dd809ca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [8a30eda4d616612ae1b9923a2d3da3ac09bf1aef2035467ff69c48a23dcc8267] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c2d2f8a619eace3e0cdd082c3b83b82082f3f176747318ccd5dc09d0d0446d91] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-996806
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-996806
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 23:44:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-996806
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 23:47:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 23:47:09 +0000   Mon, 09 Dec 2024 23:44:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 23:47:09 +0000   Mon, 09 Dec 2024 23:44:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 23:47:09 +0000   Mon, 09 Dec 2024 23:44:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 23:47:09 +0000   Mon, 09 Dec 2024 23:44:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.55
	  Hostname:    kubernetes-upgrade-996806
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e026e3dcdbda4a09a1ab39bcd6226536
	  System UUID:                e026e3dc-dbda-4a09-a1ab-39bcd6226536
	  Boot ID:                    2ceebdf7-31df-429c-95b4-c69d011d3fdf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-rp5pj                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m19s
	  kube-system                 coredns-7c65d6cfc9-v78r7                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m19s
	  kube-system                 etcd-kubernetes-upgrade-996806                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m19s
	  kube-system                 kube-apiserver-kubernetes-upgrade-996806             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-996806    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-kn7vn                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-kubernetes-upgrade-996806             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m18s                  kube-proxy       
	  Normal  Starting                 2m33s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m30s (x8 over 2m33s)  kubelet          Node kubernetes-upgrade-996806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m30s (x8 over 2m33s)  kubelet          Node kubernetes-upgrade-996806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m30s (x7 over 2m33s)  kubelet          Node kubernetes-upgrade-996806 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m20s                  node-controller  Node kubernetes-upgrade-996806 event: Registered Node kubernetes-upgrade-996806 in Controller
	  Normal  Starting                 10s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x9 over 9s)        kubelet          Node kubernetes-upgrade-996806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x7 over 9s)        kubelet          Node kubernetes-upgrade-996806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)        kubelet          Node kubernetes-upgrade-996806 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                     kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                     node-controller  Node kubernetes-upgrade-996806 event: Registered Node kubernetes-upgrade-996806 in Controller
	
	
	==> dmesg <==
	[  +1.571540] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.385406] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.065389] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074046] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.170533] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.147258] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.272197] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +4.102572] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +2.055810] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.058844] kauditd_printk_skb: 158 callbacks suppressed
	[ +12.101347] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.096102] kauditd_printk_skb: 69 callbacks suppressed
	[Dec 9 23:45] kauditd_printk_skb: 108 callbacks suppressed
	[  +1.270526] systemd-fstab-generator[2788]: Ignoring "noauto" option for root device
	[  +0.228822] systemd-fstab-generator[2823]: Ignoring "noauto" option for root device
	[  +0.336551] systemd-fstab-generator[2862]: Ignoring "noauto" option for root device
	[  +0.251891] systemd-fstab-generator[2910]: Ignoring "noauto" option for root device
	[  +0.515669] systemd-fstab-generator[2985]: Ignoring "noauto" option for root device
	[Dec 9 23:47] systemd-fstab-generator[3314]: Ignoring "noauto" option for root device
	[  +0.090608] kauditd_printk_skb: 208 callbacks suppressed
	[  +2.475159] systemd-fstab-generator[3436]: Ignoring "noauto" option for root device
	[  +4.559136] kauditd_printk_skb: 86 callbacks suppressed
	[  +1.727987] systemd-fstab-generator[4314]: Ignoring "noauto" option for root device
	
	
	==> etcd [124bf973d4fdd7823fffb39e4a88700800c61e672c5e34d2881558414e7f816b] <==
	{"level":"warn","ts":"2024-12-09T23:47:13.779748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.98694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-12-09T23:47:13.781142Z","caller":"traceutil/trace.go:171","msg":"trace[1029761470] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:486; }","duration":"221.383352ms","start":"2024-12-09T23:47:13.559745Z","end":"2024-12-09T23:47:13.781128Z","steps":["trace[1029761470] 'agreement among raft nodes before linearized reading'  (duration: 219.956712ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:47:13.779760Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.433814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-12-09T23:47:13.779791Z","caller":"traceutil/trace.go:171","msg":"trace[778329142] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"313.922503ms","start":"2024-12-09T23:47:13.465860Z","end":"2024-12-09T23:47:13.779783Z","steps":["trace[778329142] 'process raft request'  (duration: 313.039821ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:47:13.779826Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.760551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"warn","ts":"2024-12-09T23:47:13.779856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.995585ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" ","response":"range_response_count:1 size:238"}
	{"level":"info","ts":"2024-12-09T23:47:13.781562Z","caller":"traceutil/trace.go:171","msg":"trace[241788541] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:486; }","duration":"272.240066ms","start":"2024-12-09T23:47:13.509311Z","end":"2024-12-09T23:47:13.781551Z","steps":["trace[241788541] 'agreement among raft nodes before linearized reading'  (duration: 270.408457ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:47:13.782762Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T23:47:13.465837Z","time spent":"315.871511ms","remote":"127.0.0.1:45990","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1406,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-vhxw8\" mod_revision:374 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-vhxw8\" value_size:1347 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-vhxw8\" > >"}
	{"level":"info","ts":"2024-12-09T23:47:13.794951Z","caller":"traceutil/trace.go:171","msg":"trace[1789544787] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:486; }","duration":"135.870047ms","start":"2024-12-09T23:47:13.659061Z","end":"2024-12-09T23:47:13.794931Z","steps":["trace[1789544787] 'agreement among raft nodes before linearized reading'  (duration: 120.747572ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:47:13.795277Z","caller":"traceutil/trace.go:171","msg":"trace[1202101659] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner; range_end:; response_count:1; response_revision:486; }","duration":"186.408454ms","start":"2024-12-09T23:47:13.608856Z","end":"2024-12-09T23:47:13.795265Z","steps":["trace[1202101659] 'agreement among raft nodes before linearized reading'  (duration: 170.982418ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:47:14.054458Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.369784ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10049381993282085380 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:481 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3994 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-09T23:47:14.054626Z","caller":"traceutil/trace.go:171","msg":"trace[572313285] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"245.473397ms","start":"2024-12-09T23:47:13.809143Z","end":"2024-12-09T23:47:14.054617Z","steps":["trace[572313285] 'process raft request'  (duration: 245.44201ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:47:14.054832Z","caller":"traceutil/trace.go:171","msg":"trace[122007622] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"256.000315ms","start":"2024-12-09T23:47:13.798823Z","end":"2024-12-09T23:47:14.054823Z","steps":["trace[122007622] 'process raft request'  (duration: 128.123527ms)","trace[122007622] 'compare'  (duration: 127.205928ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:47:14.054908Z","caller":"traceutil/trace.go:171","msg":"trace[835851079] linearizableReadLoop","detail":"{readStateIndex:522; appliedIndex:521; }","duration":"251.391205ms","start":"2024-12-09T23:47:13.803512Z","end":"2024-12-09T23:47:14.054903Z","steps":["trace[835851079] 'read index received'  (duration: 123.4428ms)","trace[835851079] 'applied index is now lower than readState.Index'  (duration: 127.947605ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:47:14.054992Z","caller":"traceutil/trace.go:171","msg":"trace[806937934] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"247.231379ms","start":"2024-12-09T23:47:13.807756Z","end":"2024-12-09T23:47:14.054987Z","steps":["trace[806937934] 'process raft request'  (duration: 246.776263ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:47:14.055546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.023816ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" ","response":"range_response_count:1 size:370"}
	{"level":"info","ts":"2024-12-09T23:47:14.055574Z","caller":"traceutil/trace.go:171","msg":"trace[992482511] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; response_count:1; response_revision:489; }","duration":"252.058663ms","start":"2024-12-09T23:47:13.803507Z","end":"2024-12-09T23:47:14.055566Z","steps":["trace[992482511] 'agreement among raft nodes before linearized reading'  (duration: 251.687198ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:47:14.055671Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.394314ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2897"}
	{"level":"info","ts":"2024-12-09T23:47:14.055685Z","caller":"traceutil/trace.go:171","msg":"trace[51515721] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:489; }","duration":"161.408975ms","start":"2024-12-09T23:47:13.894272Z","end":"2024-12-09T23:47:14.055681Z","steps":["trace[51515721] 'agreement among raft nodes before linearized reading'  (duration: 161.378453ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:47:14.055966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.07887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-7c65d6cfc9\" ","response":"range_response_count:1 size:3797"}
	{"level":"info","ts":"2024-12-09T23:47:14.055987Z","caller":"traceutil/trace.go:171","msg":"trace[861808795] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-7c65d6cfc9; range_end:; response_count:1; response_revision:489; }","duration":"161.124931ms","start":"2024-12-09T23:47:13.894857Z","end":"2024-12-09T23:47:14.055982Z","steps":["trace[861808795] 'agreement among raft nodes before linearized reading'  (duration: 161.069235ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:47:14.056101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.507809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4058"}
	{"level":"info","ts":"2024-12-09T23:47:14.056140Z","caller":"traceutil/trace.go:171","msg":"trace[1157678459] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:489; }","duration":"161.546873ms","start":"2024-12-09T23:47:13.894587Z","end":"2024-12-09T23:47:14.056133Z","steps":["trace[1157678459] 'agreement among raft nodes before linearized reading'  (duration: 161.496261ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:47:14.056230Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.911653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/kube-dns\" ","response":"range_response_count:1 size:1211"}
	{"level":"info","ts":"2024-12-09T23:47:14.056243Z","caller":"traceutil/trace.go:171","msg":"trace[26946620] range","detail":"{range_begin:/registry/services/specs/kube-system/kube-dns; range_end:; response_count:1; response_revision:489; }","duration":"161.926804ms","start":"2024-12-09T23:47:13.894312Z","end":"2024-12-09T23:47:14.056239Z","steps":["trace[26946620] 'agreement among raft nodes before linearized reading'  (duration: 161.895393ms)"],"step_count":1}
	
	
	==> etcd [6413ccbd5bef61c269aca40a19f8f526400c949e8a99e4ab000a74b02653fc43] <==
	{"level":"info","ts":"2024-12-09T23:45:32.261228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became candidate at term 3"}
	{"level":"info","ts":"2024-12-09T23:45:32.261253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 received MsgVoteResp from 328c932a5e3b8b76 at term 3"}
	{"level":"info","ts":"2024-12-09T23:45:32.261290Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became leader at term 3"}
	{"level":"info","ts":"2024-12-09T23:45:32.261315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 328c932a5e3b8b76 elected leader 328c932a5e3b8b76 at term 3"}
	{"level":"info","ts":"2024-12-09T23:45:32.265306Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"328c932a5e3b8b76","local-member-attributes":"{Name:kubernetes-upgrade-996806 ClientURLs:[https://192.168.50.55:2379]}","request-path":"/0/members/328c932a5e3b8b76/attributes","cluster-id":"e0630d851be0da94","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-09T23:45:32.268094Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:45:32.269331Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:45:32.295832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-09T23:45:32.307110Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:45:32.310131Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-09T23:45:32.310166Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-09T23:45:32.310648Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:45:32.311499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.55:2379"}
	{"level":"info","ts":"2024-12-09T23:45:32.466201Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-09T23:45:32.466327Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-996806","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.55:2380"],"advertise-client-urls":["https://192.168.50.55:2379"]}
	{"level":"warn","ts":"2024-12-09T23:45:32.466662Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-09T23:45:32.466766Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-09T23:45:32.467098Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:43902: use of closed network connection"}
	{"level":"warn","ts":"2024-12-09T23:45:32.467125Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43910","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:43910: use of closed network connection"}
	{"level":"warn","ts":"2024-12-09T23:45:32.492233Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.55:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-09T23:45:32.492329Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.55:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-09T23:45:32.492383Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"328c932a5e3b8b76","current-leader-member-id":"328c932a5e3b8b76"}
	{"level":"info","ts":"2024-12-09T23:45:32.515930Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.55:2380"}
	{"level":"info","ts":"2024-12-09T23:45:32.517266Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.55:2380"}
	{"level":"info","ts":"2024-12-09T23:45:32.517290Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-996806","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.55:2380"],"advertise-client-urls":["https://192.168.50.55:2379"]}
	
	
	==> kernel <==
	 23:47:15 up 2 min,  0 users,  load average: 0.46, 0.21, 0.08
	Linux kubernetes-upgrade-996806 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [28e549cd2ed59bf0a7777745a0a2fe94433a229c4e13034947450e637d2d205e] <==
	I1209 23:47:09.855660       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1209 23:47:09.859789       1 shared_informer.go:320] Caches are synced for configmaps
	I1209 23:47:09.859867       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 23:47:09.859873       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 23:47:09.861168       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1209 23:47:09.861490       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1209 23:47:09.861583       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1209 23:47:09.862375       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1209 23:47:09.862760       1 aggregator.go:171] initial CRD sync complete...
	I1209 23:47:09.862825       1 autoregister_controller.go:144] Starting autoregister controller
	I1209 23:47:09.862879       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 23:47:09.862902       1 cache.go:39] Caches are synced for autoregister controller
	E1209 23:47:09.869449       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1209 23:47:09.882128       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1209 23:47:09.886539       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1209 23:47:09.886623       1 policy_source.go:224] refreshing policies
	I1209 23:47:09.954107       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 23:47:10.771560       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 23:47:11.744388       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1209 23:47:11.763414       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1209 23:47:11.801916       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1209 23:47:11.922989       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 23:47:11.931740       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 23:47:12.953924       1 controller.go:615] quota admission added evaluator for: endpoints
	I1209 23:47:13.464719       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [3072a8a1aca3b53833ead8a6d827cc2bfcce738bd98bb25a4fa41f4f7640d3f3] <==
	W1209 23:45:35.375890       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:35.407452       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:36.960102       1 logging.go:55] [core] [Channel #13 SubChannel #14]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:37.155071       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:37.616196       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:37.996465       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:38.089995       1 logging.go:55] [core] [Channel #15 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:38.353460       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:38.372109       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:40.448103       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:40.481868       1 logging.go:55] [core] [Channel #13 SubChannel #14]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:41.283450       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:41.796212       1 logging.go:55] [core] [Channel #15 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:42.013039       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:42.268510       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:42.438783       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:46.057072       1 logging.go:55] [core] [Channel #13 SubChannel #14]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:47.782472       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:48.484322       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:48.823355       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:48.825766       1 logging.go:55] [core] [Channel #15 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:49.009392       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:49.015858       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 23:45:52.461859       1 logging.go:55] [core] [Channel #19 SubChannel #20]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E1209 23:45:52.499450       1 run.go:72] "command failed" err="context deadline exceeded"
	
	
	==> kube-controller-manager [6db0e4485b2c02a4aaca1681d5018d898ede40dccc75d22c0b5e6f3d661254ed] <==
	I1209 23:47:13.219207       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1209 23:47:13.219230       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1209 23:47:13.219338       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-996806"
	I1209 23:47:13.222837       1 shared_informer.go:320] Caches are synced for taint
	I1209 23:47:13.222977       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 23:47:13.223128       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-996806"
	I1209 23:47:13.223182       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1209 23:47:13.225093       1 shared_informer.go:320] Caches are synced for attach detach
	I1209 23:47:13.225164       1 shared_informer.go:320] Caches are synced for GC
	I1209 23:47:13.227327       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1209 23:47:13.258756       1 shared_informer.go:320] Caches are synced for persistent volume
	I1209 23:47:13.260083       1 shared_informer.go:320] Caches are synced for daemon sets
	I1209 23:47:13.261250       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1209 23:47:13.261335       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1209 23:47:13.270826       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1209 23:47:13.287164       1 shared_informer.go:320] Caches are synced for disruption
	I1209 23:47:13.415365       1 shared_informer.go:320] Caches are synced for resource quota
	I1209 23:47:13.449344       1 shared_informer.go:320] Caches are synced for resource quota
	I1209 23:47:13.886103       1 shared_informer.go:320] Caches are synced for garbage collector
	I1209 23:47:13.891427       1 shared_informer.go:320] Caches are synced for garbage collector
	I1209 23:47:13.891474       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1209 23:47:14.060909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="833.494531ms"
	I1209 23:47:14.061223       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="126.348µs"
	I1209 23:47:14.449993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.174113ms"
	I1209 23:47:14.451139       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="59.698µs"
	
	
	==> kube-controller-manager [907629fb1e57d8e220943dfcc3be03fcbf254bd86303941d3f2f84294fff2150] <==
	
	
	==> kube-proxy [6b4eb7cfde4c7e2b87254bfbc9e3d1cc444bca2277b127476f09b81baf7190d6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 23:44:56.794405       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 23:44:56.812032       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.55"]
	E1209 23:44:56.812222       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:44:56.877089       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 23:44:56.877135       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 23:44:56.877158       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:44:56.881491       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:44:56.881909       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:44:56.881938       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:44:56.885166       1 config.go:199] "Starting service config controller"
	I1209 23:44:56.885185       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:44:56.885217       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:44:56.885221       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:44:56.885714       1 config.go:328] "Starting node config controller"
	I1209 23:44:56.885721       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:44:56.985902       1 shared_informer.go:320] Caches are synced for node config
	I1209 23:44:56.985935       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 23:44:56.985949       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [4cb31dd6d6f5e73595ba02ec2c7603363e9c7ebe01a06e2e59f66c919d71e0ad] <==
	I1209 23:45:32.402156       1 serving.go:386] Generated self-signed cert in-memory
	W1209 23:45:43.022728       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.50.55:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1209 23:45:43.022759       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 23:45:43.022765       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 23:45:53.507721       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1209 23:45:53.507763       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1209 23:45:53.507780       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1209 23:45:53.509894       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E1209 23:45:53.509971       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E1209 23:45:53.510643       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c07cbf04fbef958986dff652501b5f44e1e71f31a9a89928aca727b4ba284efa] <==
	I1209 23:47:07.922205       1 serving.go:386] Generated self-signed cert in-memory
	W1209 23:47:09.826465       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 23:47:09.826513       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 23:47:09.826538       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 23:47:09.826543       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 23:47:09.864578       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1209 23:47:09.864615       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:47:09.867265       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1209 23:47:09.867370       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 23:47:09.867401       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 23:47:09.867424       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 23:47:09.968176       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 23:47:09 kubernetes-upgrade-996806 kubelet[3443]: I1209 23:47:09.899737    3443 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 09 23:47:09 kubernetes-upgrade-996806 kubelet[3443]: I1209 23:47:09.900765    3443 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 09 23:47:09 kubernetes-upgrade-996806 kubelet[3443]: I1209 23:47:09.944738    3443 apiserver.go:52] "Watching apiserver"
	Dec 09 23:47:09 kubernetes-upgrade-996806 kubelet[3443]: I1209 23:47:09.973293    3443 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 09 23:47:10 kubernetes-upgrade-996806 kubelet[3443]: I1209 23:47:10.006417    3443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b740206-c18e-4c47-a382-2b44ed1644da-lib-modules\") pod \"kube-proxy-kn7vn\" (UID: \"6b740206-c18e-4c47-a382-2b44ed1644da\") " pod="kube-system/kube-proxy-kn7vn"
	Dec 09 23:47:10 kubernetes-upgrade-996806 kubelet[3443]: I1209 23:47:10.006516    3443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fc0434f5-28b9-442f-bd64-5281960fc1dc-tmp\") pod \"storage-provisioner\" (UID: \"fc0434f5-28b9-442f-bd64-5281960fc1dc\") " pod="kube-system/storage-provisioner"
	Dec 09 23:47:10 kubernetes-upgrade-996806 kubelet[3443]: I1209 23:47:10.006591    3443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b740206-c18e-4c47-a382-2b44ed1644da-xtables-lock\") pod \"kube-proxy-kn7vn\" (UID: \"6b740206-c18e-4c47-a382-2b44ed1644da\") " pod="kube-system/kube-proxy-kn7vn"
	Dec 09 23:47:10 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:10.185983    3443 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-996806\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-996806"
	Dec 09 23:47:10 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:10.494968    3443 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_storage-provisioner_storage-provisioner_kube-system_fc0434f5-28b9-442f-bd64-5281960fc1dc_1\" is already in use by 7f1c2db79f66983e7aacbc2ee35702a7fd4589e8f842655fecbf102e66fc4987. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="70471114781bb200236d20114b52fc1d92106552bd633f4aa0a5f83c879b87c0"
	Dec 09 23:47:10 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:10.495177    3443 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn5xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevi
ce{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod storage-provisioner_kube-system(fc0434f5-28b9-442f-bd64-5281960fc1dc): CreateContainerError: the container name \"k8s_storage-provisioner_storage-provisioner_kube-system_fc0434f5-28b9-442f-bd64-5281960fc1dc_1\" is already in use by 7f1c2db79f66983e7aacbc2ee35702a7fd4589e8f842655fecbf102e66fc4987. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Dec 09 23:47:10 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:10.496616    3443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerError: \"the container name \\\"k8s_storage-provisioner_storage-provisioner_kube-system_fc0434f5-28b9-442f-bd64-5281960fc1dc_1\\\" is already in use by 7f1c2db79f66983e7aacbc2ee35702a7fd4589e8f842655fecbf102e66fc4987. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/storage-provisioner" podUID="fc0434f5-28b9-442f-bd64-5281960fc1dc"
	Dec 09 23:47:10 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:10.573034    3443 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-proxy_kube-proxy-kn7vn_kube-system_6b740206-c18e-4c47-a382-2b44ed1644da_1\" is already in use by 54d656ef7ab03dbba62fdb92033258eea155b353be950db0604a7f9e76094ff1. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="d013b5dd96990aaff3bb3b0ab65eadd6ad48c295fa19b3474d6efbc0a41907e8"
	Dec 09 23:47:10 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:10.573365    3443 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.31.2,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,Read
Only:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qbk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-proxy-kn7vn_kube-system(6b740206-c18e-4c47-a382-2b44ed1644da): CreateContainerError: the container name \"k8s_kube-proxy_kube-proxy-kn
7vn_kube-system_6b740206-c18e-4c47-a382-2b44ed1644da_1\" is already in use by 54d656ef7ab03dbba62fdb92033258eea155b353be950db0604a7f9e76094ff1. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Dec 09 23:47:10 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:10.574596    3443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"the container name \\\"k8s_kube-proxy_kube-proxy-kn7vn_kube-system_6b740206-c18e-4c47-a382-2b44ed1644da_1\\\" is already in use by 54d656ef7ab03dbba62fdb92033258eea155b353be950db0604a7f9e76094ff1. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-proxy-kn7vn" podUID="6b740206-c18e-4c47-a382-2b44ed1644da"
	Dec 09 23:47:11 kubernetes-upgrade-996806 kubelet[3443]: I1209 23:47:11.133470    3443 scope.go:117] "RemoveContainer" containerID="4684e841001684b0a88efea522e4d83851163f561b8275f9217e8dcd839425f1"
	Dec 09 23:47:11 kubernetes-upgrade-996806 kubelet[3443]: I1209 23:47:11.160983    3443 scope.go:117] "RemoveContainer" containerID="6b4eb7cfde4c7e2b87254bfbc9e3d1cc444bca2277b127476f09b81baf7190d6"
	Dec 09 23:47:11 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:11.165783    3443 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_storage-provisioner_storage-provisioner_kube-system_fc0434f5-28b9-442f-bd64-5281960fc1dc_1\" is already in use by 7f1c2db79f66983e7aacbc2ee35702a7fd4589e8f842655fecbf102e66fc4987. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="70471114781bb200236d20114b52fc1d92106552bd633f4aa0a5f83c879b87c0"
	Dec 09 23:47:11 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:11.165904    3443 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn5xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevi
ce{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod storage-provisioner_kube-system(fc0434f5-28b9-442f-bd64-5281960fc1dc): CreateContainerError: the container name \"k8s_storage-provisioner_storage-provisioner_kube-system_fc0434f5-28b9-442f-bd64-5281960fc1dc_1\" is already in use by 7f1c2db79f66983e7aacbc2ee35702a7fd4589e8f842655fecbf102e66fc4987. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Dec 09 23:47:11 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:11.168240    3443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerError: \"the container name \\\"k8s_storage-provisioner_storage-provisioner_kube-system_fc0434f5-28b9-442f-bd64-5281960fc1dc_1\\\" is already in use by 7f1c2db79f66983e7aacbc2ee35702a7fd4589e8f842655fecbf102e66fc4987. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/storage-provisioner" podUID="fc0434f5-28b9-442f-bd64-5281960fc1dc"
	Dec 09 23:47:11 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:11.171474    3443 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-proxy_kube-proxy-kn7vn_kube-system_6b740206-c18e-4c47-a382-2b44ed1644da_1\" is already in use by 54d656ef7ab03dbba62fdb92033258eea155b353be950db0604a7f9e76094ff1. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="d013b5dd96990aaff3bb3b0ab65eadd6ad48c295fa19b3474d6efbc0a41907e8"
	Dec 09 23:47:11 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:11.171613    3443 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.31.2,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,Read
Only:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qbk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-proxy-kn7vn_kube-system(6b740206-c18e-4c47-a382-2b44ed1644da): CreateContainerError: the container name \"k8s_kube-proxy_kube-proxy-kn
7vn_kube-system_6b740206-c18e-4c47-a382-2b44ed1644da_1\" is already in use by 54d656ef7ab03dbba62fdb92033258eea155b353be950db0604a7f9e76094ff1. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Dec 09 23:47:11 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:11.173107    3443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"the container name \\\"k8s_kube-proxy_kube-proxy-kn7vn_kube-system_6b740206-c18e-4c47-a382-2b44ed1644da_1\\\" is already in use by 54d656ef7ab03dbba62fdb92033258eea155b353be950db0604a7f9e76094ff1. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-proxy-kn7vn" podUID="6b740206-c18e-4c47-a382-2b44ed1644da"
	Dec 09 23:47:14 kubernetes-upgrade-996806 kubelet[3443]: I1209 23:47:14.413462    3443 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 09 23:47:16 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:16.038436    3443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788036036906378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:47:16 kubernetes-upgrade-996806 kubelet[3443]: E1209 23:47:16.038467    3443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788036036906378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [4684e841001684b0a88efea522e4d83851163f561b8275f9217e8dcd839425f1] <==
	I1209 23:44:57.529665       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:44:57.560685       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:44:57.560842       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:44:57.587941       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:44:57.588409       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-996806_284ab634-965b-4f8a-a9ea-090d64537dd2!
	I1209 23:44:57.588574       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc92c7db-80c0-417d-9729-5ee4c8d635d9", APIVersion:"v1", ResourceVersion:"376", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-996806_284ab634-965b-4f8a-a9ea-090d64537dd2 became leader
	I1209 23:44:57.689295       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-996806_284ab634-965b-4f8a-a9ea-090d64537dd2!
	

                                                
                                                
-- /stdout --
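In the kubelet section of the log above, CRI-O refuses to start the storage-provisioner and kube-proxy containers because containers holding the same generated names were left behind by the previous attempt. If the kubernetes-upgrade-996806 profile were still around (it is deleted a few lines further down), one way to inspect and clear the conflict by hand would be the commands below; the crictl calls are illustrative and not part of the test, and the container IDs are the ones reported in the kubelet errors above. Once the names are free, the kubelet's next sync should recreate both containers on its own.

    # list every CRI-O container inside the guest, including exited ones
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-996806 -- sudo crictl ps -a
    # remove the stale containers that still hold the conflicting names
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-996806 -- sudo crictl rm 7f1c2db79f66983e7aacbc2ee35702a7fd4589e8f842655fecbf102e66fc4987
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-996806 -- sudo crictl rm 54d656ef7ab03dbba62fdb92033258eea155b353be950db0604a7f9e76094ff1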
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-996806 -n kubernetes-upgrade-996806
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-996806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-996806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-996806
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-996806: (1.212638497s)
--- FAIL: TestKubernetesUpgrade (525.65s)
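The first kube-scheduler instance in the post-mortem above gave up after a TLS handshake timeout against https://192.168.50.55:8443 and exited with "finished without leader elect"; its replacement connected cleanly about a minute later. A quick way to tell whether the apiserver itself was unreachable during that window is to probe its health endpoints from inside the VM. This is a sketch only, assuming the profile still exists and curl is available in the guest image; it is not part of the test:

    # liveness and readiness of the apiserver the scheduler was trying to reach
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-996806 -- sudo curl -sk https://192.168.50.55:8443/healthz
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-996806 -- sudo curl -sk https://192.168.50.55:8443/readyz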

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (278.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-720064 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-720064 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m38.170288666s)

                                                
                                                
-- stdout --
	* [old-k8s-version-720064] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-720064" primary control-plane node in "old-k8s-version-720064" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
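The stderr log below shows the kvm2 driver defining a private libvirt network (mk-old-k8s-version-720064) and then polling for a DHCP lease before it can SSH into the new VM. When a start like this stalls at "Waiting to get IP", the same information can be read directly from libvirt; a sketch, assuming virsh is installed on the host and using the qemu:///system URI from the start command above:

    # confirm the per-profile network was created and is active
    virsh --connect qemu:///system net-list --all
    # DHCP leases handed out on that network (the driver waits for this entry to appear)
    virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-720064
    # interface addresses reported for the domain itself
    virsh --connect qemu:///system domifaddr old-k8s-version-720064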
** stderr ** 
	I1209 23:48:57.375146   77239 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:48:57.375261   77239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:48:57.375270   77239 out.go:358] Setting ErrFile to fd 2...
	I1209 23:48:57.375275   77239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:48:57.375461   77239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:48:57.376043   77239 out.go:352] Setting JSON to false
	I1209 23:48:57.377095   77239 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9088,"bootTime":1733779049,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:48:57.377192   77239 start.go:139] virtualization: kvm guest
	I1209 23:48:57.379417   77239 out.go:177] * [old-k8s-version-720064] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:48:57.381084   77239 notify.go:220] Checking for updates...
	I1209 23:48:57.381753   77239 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:48:57.383445   77239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:48:57.384768   77239 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:48:57.386227   77239 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:48:57.387445   77239 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:48:57.388674   77239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:48:57.390569   77239 config.go:182] Loaded profile config "bridge-030585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:48:57.390656   77239 config.go:182] Loaded profile config "enable-default-cni-030585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:48:57.390731   77239 config.go:182] Loaded profile config "flannel-030585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:48:57.390811   77239 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:48:57.437576   77239 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 23:48:57.439024   77239 start.go:297] selected driver: kvm2
	I1209 23:48:57.439047   77239 start.go:901] validating driver "kvm2" against <nil>
	I1209 23:48:57.439062   77239 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:48:57.440768   77239 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:48:57.440897   77239 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:48:57.457687   77239 install.go:137] /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:48:57.457756   77239 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:48:57.458073   77239 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:48:57.458112   77239 cni.go:84] Creating CNI manager for ""
	I1209 23:48:57.458151   77239 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:48:57.458163   77239 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 23:48:57.458218   77239 start.go:340] cluster config:
	{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:48:57.458350   77239 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:48:57.460272   77239 out.go:177] * Starting "old-k8s-version-720064" primary control-plane node in "old-k8s-version-720064" cluster
	I1209 23:48:57.461477   77239 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:48:57.461523   77239 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 23:48:57.461535   77239 cache.go:56] Caching tarball of preloaded images
	I1209 23:48:57.461625   77239 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:48:57.461640   77239 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1209 23:48:57.461752   77239 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:48:57.461775   77239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json: {Name:mk8255f65c179de1e3f80b64adf9ec73ef85da98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:48:57.461926   77239 start.go:360] acquireMachinesLock for old-k8s-version-720064: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:49:07.104198   77239 start.go:364] duration metric: took 9.642242248s to acquireMachinesLock for "old-k8s-version-720064"
	I1209 23:49:07.104281   77239 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:49:07.104381   77239 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 23:49:07.106747   77239 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 23:49:07.106925   77239 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:49:07.106960   77239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:49:07.127845   77239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32809
	I1209 23:49:07.128353   77239 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:49:07.129043   77239 main.go:141] libmachine: Using API Version  1
	I1209 23:49:07.129067   77239 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:49:07.129558   77239 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:49:07.132074   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:49:07.132281   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:49:07.132477   77239 start.go:159] libmachine.API.Create for "old-k8s-version-720064" (driver="kvm2")
	I1209 23:49:07.132505   77239 client.go:168] LocalClient.Create starting
	I1209 23:49:07.132546   77239 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1209 23:49:07.132595   77239 main.go:141] libmachine: Decoding PEM data...
	I1209 23:49:07.132613   77239 main.go:141] libmachine: Parsing certificate...
	I1209 23:49:07.132678   77239 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1209 23:49:07.132709   77239 main.go:141] libmachine: Decoding PEM data...
	I1209 23:49:07.132727   77239 main.go:141] libmachine: Parsing certificate...
	I1209 23:49:07.132758   77239 main.go:141] libmachine: Running pre-create checks...
	I1209 23:49:07.132774   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .PreCreateCheck
	I1209 23:49:07.133130   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetConfigRaw
	I1209 23:49:07.133700   77239 main.go:141] libmachine: Creating machine...
	I1209 23:49:07.133716   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .Create
	I1209 23:49:07.133868   77239 main.go:141] libmachine: (old-k8s-version-720064) Creating KVM machine...
	I1209 23:49:07.135275   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found existing default KVM network
	I1209 23:49:07.137093   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:07.136808   78517 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002049e0}
	I1209 23:49:07.137113   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | created network xml: 
	I1209 23:49:07.137126   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | <network>
	I1209 23:49:07.137140   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG |   <name>mk-old-k8s-version-720064</name>
	I1209 23:49:07.137150   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG |   <dns enable='no'/>
	I1209 23:49:07.137161   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG |   
	I1209 23:49:07.137170   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 23:49:07.137181   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG |     <dhcp>
	I1209 23:49:07.137192   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 23:49:07.137199   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG |     </dhcp>
	I1209 23:49:07.137209   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG |   </ip>
	I1209 23:49:07.137221   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG |   
	I1209 23:49:07.137231   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | </network>
	I1209 23:49:07.137244   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | 
	I1209 23:49:07.143340   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | trying to create private KVM network mk-old-k8s-version-720064 192.168.39.0/24...
	I1209 23:49:07.221300   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | private KVM network mk-old-k8s-version-720064 192.168.39.0/24 created
	I1209 23:49:07.221343   77239 main.go:141] libmachine: (old-k8s-version-720064) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064 ...
	I1209 23:49:07.221365   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:07.221281   78517 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:49:07.221373   77239 main.go:141] libmachine: (old-k8s-version-720064) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 23:49:07.221443   77239 main.go:141] libmachine: (old-k8s-version-720064) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 23:49:07.489216   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:07.489026   78517 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa...
	I1209 23:49:07.710213   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:07.710057   78517 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/old-k8s-version-720064.rawdisk...
	I1209 23:49:07.710255   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Writing magic tar header
	I1209 23:49:07.710281   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Writing SSH key tar header
	I1209 23:49:07.710299   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:07.710212   78517 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064 ...
	I1209 23:49:07.710383   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064
	I1209 23:49:07.710412   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1209 23:49:07.710428   77239 main.go:141] libmachine: (old-k8s-version-720064) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064 (perms=drwx------)
	I1209 23:49:07.710452   77239 main.go:141] libmachine: (old-k8s-version-720064) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1209 23:49:07.710468   77239 main.go:141] libmachine: (old-k8s-version-720064) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1209 23:49:07.710485   77239 main.go:141] libmachine: (old-k8s-version-720064) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1209 23:49:07.710509   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:49:07.710522   77239 main.go:141] libmachine: (old-k8s-version-720064) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 23:49:07.710534   77239 main.go:141] libmachine: (old-k8s-version-720064) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 23:49:07.710543   77239 main.go:141] libmachine: (old-k8s-version-720064) Creating domain...
	I1209 23:49:07.710561   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1209 23:49:07.710575   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 23:49:07.710600   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Checking permissions on dir: /home/jenkins
	I1209 23:49:07.710618   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Checking permissions on dir: /home
	I1209 23:49:07.710646   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Skipping /home - not owner
	I1209 23:49:07.711607   77239 main.go:141] libmachine: (old-k8s-version-720064) define libvirt domain using xml: 
	I1209 23:49:07.711629   77239 main.go:141] libmachine: (old-k8s-version-720064) <domain type='kvm'>
	I1209 23:49:07.711637   77239 main.go:141] libmachine: (old-k8s-version-720064)   <name>old-k8s-version-720064</name>
	I1209 23:49:07.711641   77239 main.go:141] libmachine: (old-k8s-version-720064)   <memory unit='MiB'>2200</memory>
	I1209 23:49:07.711647   77239 main.go:141] libmachine: (old-k8s-version-720064)   <vcpu>2</vcpu>
	I1209 23:49:07.711652   77239 main.go:141] libmachine: (old-k8s-version-720064)   <features>
	I1209 23:49:07.711683   77239 main.go:141] libmachine: (old-k8s-version-720064)     <acpi/>
	I1209 23:49:07.711705   77239 main.go:141] libmachine: (old-k8s-version-720064)     <apic/>
	I1209 23:49:07.711733   77239 main.go:141] libmachine: (old-k8s-version-720064)     <pae/>
	I1209 23:49:07.711759   77239 main.go:141] libmachine: (old-k8s-version-720064)     
	I1209 23:49:07.711773   77239 main.go:141] libmachine: (old-k8s-version-720064)   </features>
	I1209 23:49:07.711782   77239 main.go:141] libmachine: (old-k8s-version-720064)   <cpu mode='host-passthrough'>
	I1209 23:49:07.711795   77239 main.go:141] libmachine: (old-k8s-version-720064)   
	I1209 23:49:07.711806   77239 main.go:141] libmachine: (old-k8s-version-720064)   </cpu>
	I1209 23:49:07.711819   77239 main.go:141] libmachine: (old-k8s-version-720064)   <os>
	I1209 23:49:07.711830   77239 main.go:141] libmachine: (old-k8s-version-720064)     <type>hvm</type>
	I1209 23:49:07.711841   77239 main.go:141] libmachine: (old-k8s-version-720064)     <boot dev='cdrom'/>
	I1209 23:49:07.711852   77239 main.go:141] libmachine: (old-k8s-version-720064)     <boot dev='hd'/>
	I1209 23:49:07.711865   77239 main.go:141] libmachine: (old-k8s-version-720064)     <bootmenu enable='no'/>
	I1209 23:49:07.711880   77239 main.go:141] libmachine: (old-k8s-version-720064)   </os>
	I1209 23:49:07.711892   77239 main.go:141] libmachine: (old-k8s-version-720064)   <devices>
	I1209 23:49:07.711904   77239 main.go:141] libmachine: (old-k8s-version-720064)     <disk type='file' device='cdrom'>
	I1209 23:49:07.711923   77239 main.go:141] libmachine: (old-k8s-version-720064)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/boot2docker.iso'/>
	I1209 23:49:07.711935   77239 main.go:141] libmachine: (old-k8s-version-720064)       <target dev='hdc' bus='scsi'/>
	I1209 23:49:07.711947   77239 main.go:141] libmachine: (old-k8s-version-720064)       <readonly/>
	I1209 23:49:07.711963   77239 main.go:141] libmachine: (old-k8s-version-720064)     </disk>
	I1209 23:49:07.711976   77239 main.go:141] libmachine: (old-k8s-version-720064)     <disk type='file' device='disk'>
	I1209 23:49:07.711986   77239 main.go:141] libmachine: (old-k8s-version-720064)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 23:49:07.712005   77239 main.go:141] libmachine: (old-k8s-version-720064)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/old-k8s-version-720064.rawdisk'/>
	I1209 23:49:07.712017   77239 main.go:141] libmachine: (old-k8s-version-720064)       <target dev='hda' bus='virtio'/>
	I1209 23:49:07.712027   77239 main.go:141] libmachine: (old-k8s-version-720064)     </disk>
	I1209 23:49:07.712042   77239 main.go:141] libmachine: (old-k8s-version-720064)     <interface type='network'>
	I1209 23:49:07.712057   77239 main.go:141] libmachine: (old-k8s-version-720064)       <source network='mk-old-k8s-version-720064'/>
	I1209 23:49:07.712068   77239 main.go:141] libmachine: (old-k8s-version-720064)       <model type='virtio'/>
	I1209 23:49:07.712081   77239 main.go:141] libmachine: (old-k8s-version-720064)     </interface>
	I1209 23:49:07.712092   77239 main.go:141] libmachine: (old-k8s-version-720064)     <interface type='network'>
	I1209 23:49:07.712106   77239 main.go:141] libmachine: (old-k8s-version-720064)       <source network='default'/>
	I1209 23:49:07.712121   77239 main.go:141] libmachine: (old-k8s-version-720064)       <model type='virtio'/>
	I1209 23:49:07.712134   77239 main.go:141] libmachine: (old-k8s-version-720064)     </interface>
	I1209 23:49:07.712145   77239 main.go:141] libmachine: (old-k8s-version-720064)     <serial type='pty'>
	I1209 23:49:07.712156   77239 main.go:141] libmachine: (old-k8s-version-720064)       <target port='0'/>
	I1209 23:49:07.712165   77239 main.go:141] libmachine: (old-k8s-version-720064)     </serial>
	I1209 23:49:07.712178   77239 main.go:141] libmachine: (old-k8s-version-720064)     <console type='pty'>
	I1209 23:49:07.712194   77239 main.go:141] libmachine: (old-k8s-version-720064)       <target type='serial' port='0'/>
	I1209 23:49:07.712213   77239 main.go:141] libmachine: (old-k8s-version-720064)     </console>
	I1209 23:49:07.712225   77239 main.go:141] libmachine: (old-k8s-version-720064)     <rng model='virtio'>
	I1209 23:49:07.712237   77239 main.go:141] libmachine: (old-k8s-version-720064)       <backend model='random'>/dev/random</backend>
	I1209 23:49:07.712247   77239 main.go:141] libmachine: (old-k8s-version-720064)     </rng>
	I1209 23:49:07.712258   77239 main.go:141] libmachine: (old-k8s-version-720064)     
	I1209 23:49:07.712272   77239 main.go:141] libmachine: (old-k8s-version-720064)     
	I1209 23:49:07.712282   77239 main.go:141] libmachine: (old-k8s-version-720064)   </devices>
	I1209 23:49:07.712292   77239 main.go:141] libmachine: (old-k8s-version-720064) </domain>
	I1209 23:49:07.712307   77239 main.go:141] libmachine: (old-k8s-version-720064) 
	I1209 23:49:07.716362   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:70:cf:44 in network default
	I1209 23:49:07.717162   77239 main.go:141] libmachine: (old-k8s-version-720064) Ensuring networks are active...
	I1209 23:49:07.717188   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:07.718101   77239 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network default is active
	I1209 23:49:07.718540   77239 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network mk-old-k8s-version-720064 is active
	I1209 23:49:07.719342   77239 main.go:141] libmachine: (old-k8s-version-720064) Getting domain xml...
	I1209 23:49:07.720223   77239 main.go:141] libmachine: (old-k8s-version-720064) Creating domain...
	I1209 23:49:09.202912   77239 main.go:141] libmachine: (old-k8s-version-720064) Waiting to get IP...
	I1209 23:49:09.203879   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:09.204390   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:09.204416   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:09.204367   78517 retry.go:31] will retry after 269.917232ms: waiting for machine to come up
	I1209 23:49:09.476175   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:09.476766   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:09.476792   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:09.476736   78517 retry.go:31] will retry after 346.818055ms: waiting for machine to come up
	I1209 23:49:09.825200   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:09.825843   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:09.825869   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:09.825815   78517 retry.go:31] will retry after 354.505909ms: waiting for machine to come up
	I1209 23:49:10.182559   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:10.183068   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:10.183108   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:10.183051   78517 retry.go:31] will retry after 506.283729ms: waiting for machine to come up
	I1209 23:49:10.690774   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:10.691419   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:10.691448   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:10.691368   78517 retry.go:31] will retry after 715.804505ms: waiting for machine to come up
	I1209 23:49:11.408504   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:11.409077   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:11.409115   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:11.408991   78517 retry.go:31] will retry after 583.585765ms: waiting for machine to come up
	I1209 23:49:11.993841   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:11.994313   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:11.994353   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:11.994269   78517 retry.go:31] will retry after 1.005380248s: waiting for machine to come up
	I1209 23:49:13.001925   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:13.002390   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:13.002411   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:13.002338   78517 retry.go:31] will retry after 906.065305ms: waiting for machine to come up
	I1209 23:49:13.910174   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:13.910780   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:13.910810   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:13.910725   78517 retry.go:31] will retry after 1.711729438s: waiting for machine to come up
	I1209 23:49:15.624732   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:15.625277   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:15.625302   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:15.625241   78517 retry.go:31] will retry after 1.53861821s: waiting for machine to come up
	I1209 23:49:17.165631   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:17.166149   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:17.166194   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:17.166115   78517 retry.go:31] will retry after 2.406150909s: waiting for machine to come up
	I1209 23:49:19.575859   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:19.576450   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:19.576473   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:19.576421   78517 retry.go:31] will retry after 2.56146412s: waiting for machine to come up
	I1209 23:49:22.138939   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:22.139350   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:22.139373   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:22.139292   78517 retry.go:31] will retry after 4.078486503s: waiting for machine to come up
	I1209 23:49:26.220676   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:26.221098   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:49:26.221132   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:49:26.221062   78517 retry.go:31] will retry after 3.435778667s: waiting for machine to come up
	I1209 23:49:29.659725   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:29.660430   77239 main.go:141] libmachine: (old-k8s-version-720064) Found IP for machine: 192.168.39.188
	I1209 23:49:29.660449   77239 main.go:141] libmachine: (old-k8s-version-720064) Reserving static IP address...
	I1209 23:49:29.660472   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has current primary IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:29.660795   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"} in network mk-old-k8s-version-720064
	I1209 23:49:29.899818   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Getting to WaitForSSH function...
	I1209 23:49:29.899848   77239 main.go:141] libmachine: (old-k8s-version-720064) Reserved static IP address: 192.168.39.188
	I1209 23:49:29.899860   77239 main.go:141] libmachine: (old-k8s-version-720064) Waiting for SSH to be available...
	I1209 23:49:29.902623   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:29.903161   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:29.903192   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:29.903370   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH client type: external
	I1209 23:49:29.903399   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa (-rw-------)
	I1209 23:49:29.903434   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:49:29.903448   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | About to run SSH command:
	I1209 23:49:29.903462   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | exit 0
	I1209 23:49:30.040506   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | SSH cmd err, output: <nil>: 
	I1209 23:49:30.040998   77239 main.go:141] libmachine: (old-k8s-version-720064) KVM machine creation complete!
	I1209 23:49:30.041268   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetConfigRaw
	I1209 23:49:30.041912   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:49:30.042137   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:49:30.042307   77239 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 23:49:30.042327   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetState
	I1209 23:49:30.043751   77239 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 23:49:30.043765   77239 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 23:49:30.043772   77239 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 23:49:30.043781   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:49:30.046727   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.047127   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:30.047160   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.047365   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:49:30.047598   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:30.047785   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:30.047973   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:49:30.048170   77239 main.go:141] libmachine: Using SSH client type: native
	I1209 23:49:30.048415   77239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:49:30.048434   77239 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 23:49:30.163627   77239 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:49:30.163668   77239 main.go:141] libmachine: Detecting the provisioner...
	I1209 23:49:30.163680   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:49:30.166989   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.167465   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:30.167505   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.167654   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:49:30.167912   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:30.168075   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:30.168247   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:49:30.168426   77239 main.go:141] libmachine: Using SSH client type: native
	I1209 23:49:30.168592   77239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:49:30.168602   77239 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 23:49:30.289512   77239 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 23:49:30.289618   77239 main.go:141] libmachine: found compatible host: buildroot
	I1209 23:49:30.289632   77239 main.go:141] libmachine: Provisioning with buildroot...
	I1209 23:49:30.289640   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:49:30.289867   77239 buildroot.go:166] provisioning hostname "old-k8s-version-720064"
	I1209 23:49:30.289900   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:49:30.290090   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:49:30.292828   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.293210   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:30.293239   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.293437   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:49:30.293635   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:30.293794   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:30.293943   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:49:30.294123   77239 main.go:141] libmachine: Using SSH client type: native
	I1209 23:49:30.294343   77239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:49:30.294432   77239 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-720064 && echo "old-k8s-version-720064" | sudo tee /etc/hostname
	I1209 23:49:30.428623   77239 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-720064
	
	I1209 23:49:30.428660   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:49:30.431358   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.431752   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:30.431781   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.431985   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:49:30.432173   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:30.432396   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:30.432527   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:49:30.432712   77239 main.go:141] libmachine: Using SSH client type: native
	I1209 23:49:30.432920   77239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:49:30.432945   77239 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-720064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-720064/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-720064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:49:30.556220   77239 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:49:30.556250   77239 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:49:30.556276   77239 buildroot.go:174] setting up certificates
	I1209 23:49:30.556290   77239 provision.go:84] configureAuth start
	I1209 23:49:30.556303   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:49:30.556646   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:49:30.559409   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.559751   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:30.559784   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.559925   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:49:30.562115   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.562443   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:30.562503   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.562640   77239 provision.go:143] copyHostCerts
	I1209 23:49:30.562692   77239 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:49:30.562711   77239 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:49:30.562777   77239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:49:30.562904   77239 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:49:30.562915   77239 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:49:30.562951   77239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:49:30.563046   77239 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:49:30.563063   77239 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:49:30.563095   77239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:49:30.563190   77239 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-720064 san=[127.0.0.1 192.168.39.188 localhost minikube old-k8s-version-720064]
	I1209 23:49:30.670332   77239 provision.go:177] copyRemoteCerts
	I1209 23:49:30.670419   77239 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:49:30.670448   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:49:30.673728   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.674117   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:30.674149   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.674342   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:49:30.674563   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:30.674725   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:49:30.674866   77239 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:49:30.765316   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:49:30.796863   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 23:49:30.821584   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:49:30.847315   77239 provision.go:87] duration metric: took 290.973717ms to configureAuth
	I1209 23:49:30.847359   77239 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:49:30.847515   77239 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:49:30.847613   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:49:30.850374   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.850766   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:30.850794   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:30.850964   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:49:30.851208   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:30.851389   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:30.851554   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:49:30.851742   77239 main.go:141] libmachine: Using SSH client type: native
	I1209 23:49:30.851954   77239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:49:30.851971   77239 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:49:31.074522   77239 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:49:31.074553   77239 main.go:141] libmachine: Checking connection to Docker...
	I1209 23:49:31.074565   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetURL
	I1209 23:49:31.075981   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using libvirt version 6000000
	I1209 23:49:31.078688   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.079421   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:31.079452   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.079644   77239 main.go:141] libmachine: Docker is up and running!
	I1209 23:49:31.079657   77239 main.go:141] libmachine: Reticulating splines...
	I1209 23:49:31.079665   77239 client.go:171] duration metric: took 23.947152446s to LocalClient.Create
	I1209 23:49:31.079688   77239 start.go:167] duration metric: took 23.947214632s to libmachine.API.Create "old-k8s-version-720064"
	I1209 23:49:31.079703   77239 start.go:293] postStartSetup for "old-k8s-version-720064" (driver="kvm2")
	I1209 23:49:31.079715   77239 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:49:31.079738   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:49:31.079959   77239 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:49:31.079982   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:49:31.082360   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.082677   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:31.082702   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.082835   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:49:31.083035   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:31.083233   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:49:31.083409   77239 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:49:31.170436   77239 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:49:31.174616   77239 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:49:31.174644   77239 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:49:31.174723   77239 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:49:31.174819   77239 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:49:31.174910   77239 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:49:31.184315   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:49:31.207225   77239 start.go:296] duration metric: took 127.502301ms for postStartSetup
	I1209 23:49:31.207285   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetConfigRaw
	I1209 23:49:31.207944   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:49:31.210535   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.210889   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:31.210920   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.211129   77239 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:49:31.211321   77239 start.go:128] duration metric: took 24.106879747s to createHost
	I1209 23:49:31.211349   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:49:31.213573   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.213824   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:31.213847   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.214019   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:49:31.214196   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:31.214356   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:31.214490   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:49:31.214643   77239 main.go:141] libmachine: Using SSH client type: native
	I1209 23:49:31.214813   77239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:49:31.214839   77239 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:49:31.328325   77239 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788171.300852320
	
	I1209 23:49:31.328355   77239 fix.go:216] guest clock: 1733788171.300852320
	I1209 23:49:31.328365   77239 fix.go:229] Guest: 2024-12-09 23:49:31.30085232 +0000 UTC Remote: 2024-12-09 23:49:31.211332436 +0000 UTC m=+33.875657021 (delta=89.519884ms)
	I1209 23:49:31.328388   77239 fix.go:200] guest clock delta is within tolerance: 89.519884ms
	I1209 23:49:31.328396   77239 start.go:83] releasing machines lock for "old-k8s-version-720064", held for 24.224148601s
	I1209 23:49:31.328433   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:49:31.328757   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:49:31.331641   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.332044   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:31.332071   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.333543   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:49:31.334094   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:49:31.334262   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:49:31.334349   77239 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:49:31.334391   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:49:31.334508   77239 ssh_runner.go:195] Run: cat /version.json
	I1209 23:49:31.334529   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:49:31.337428   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.337594   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.337877   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:31.337902   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.338012   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:31.338035   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:31.338038   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:49:31.338191   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:49:31.338254   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:31.338382   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:49:31.338443   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:49:31.338514   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:49:31.338581   77239 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:49:31.338641   77239 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:49:31.443783   77239 ssh_runner.go:195] Run: systemctl --version
	I1209 23:49:31.450273   77239 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:49:31.618983   77239 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:49:31.627412   77239 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:49:31.627496   77239 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:49:31.643976   77239 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:49:31.644001   77239 start.go:495] detecting cgroup driver to use...
	I1209 23:49:31.644072   77239 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:49:31.661035   77239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:49:31.675600   77239 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:49:31.675660   77239 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:49:31.690733   77239 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:49:31.704585   77239 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:49:31.825110   77239 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:49:31.991159   77239 docker.go:233] disabling docker service ...
	I1209 23:49:31.991231   77239 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:49:32.008056   77239 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:49:32.021444   77239 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:49:32.136498   77239 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:49:32.251655   77239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:49:32.266845   77239 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:49:32.286849   77239 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 23:49:32.286908   77239 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:49:32.297996   77239 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:49:32.298066   77239 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:49:32.308602   77239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:49:32.319189   77239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:49:32.332799   77239 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:49:32.344055   77239 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:49:32.356046   77239 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:49:32.356103   77239 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:49:32.373439   77239 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:49:32.383957   77239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:49:32.506650   77239 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:49:32.607128   77239 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:49:32.607202   77239 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:49:32.613160   77239 start.go:563] Will wait 60s for crictl version
	I1209 23:49:32.613292   77239 ssh_runner.go:195] Run: which crictl
	I1209 23:49:32.617577   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:49:32.658525   77239 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:49:32.658616   77239 ssh_runner.go:195] Run: crio --version
	I1209 23:49:32.686888   77239 ssh_runner.go:195] Run: crio --version
	I1209 23:49:32.717765   77239 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 23:49:32.719094   77239 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:49:32.722346   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:32.722743   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:49:22 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:49:32.722785   77239 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:49:32.723017   77239 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 23:49:32.729306   77239 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:49:32.748829   77239 kubeadm.go:883] updating cluster {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:49:32.748936   77239 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:49:32.748982   77239 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:49:32.789373   77239 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:49:32.789442   77239 ssh_runner.go:195] Run: which lz4
	I1209 23:49:32.793679   77239 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:49:32.798398   77239 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:49:32.798438   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 23:49:34.321896   77239 crio.go:462] duration metric: took 1.528256737s to copy over tarball
	I1209 23:49:34.321980   77239 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:49:37.030895   77239 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.708864421s)
	I1209 23:49:37.030929   77239 crio.go:469] duration metric: took 2.709007801s to extract the tarball
	I1209 23:49:37.030939   77239 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:49:37.087710   77239 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:49:37.136724   77239 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:49:37.136750   77239 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:49:37.136881   77239 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:49:37.136908   77239 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:49:37.136923   77239 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:49:37.136929   77239 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 23:49:37.136890   77239 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:49:37.136911   77239 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:49:37.136895   77239 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 23:49:37.136911   77239 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:49:37.138367   77239 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:49:37.138512   77239 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:49:37.138527   77239 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:49:37.138539   77239 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:49:37.138617   77239 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:49:37.138624   77239 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:49:37.138694   77239 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 23:49:37.138695   77239 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 23:49:37.294191   77239 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:49:37.298220   77239 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 23:49:37.304145   77239 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:49:37.310379   77239 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:49:37.313055   77239 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 23:49:37.328838   77239 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:49:37.336783   77239 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 23:49:37.390169   77239 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 23:49:37.390244   77239 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:49:37.390293   77239 ssh_runner.go:195] Run: which crictl
	I1209 23:49:37.414601   77239 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 23:49:37.414648   77239 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:49:37.414697   77239 ssh_runner.go:195] Run: which crictl
	I1209 23:49:37.459158   77239 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 23:49:37.459210   77239 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:49:37.459266   77239 ssh_runner.go:195] Run: which crictl
	I1209 23:49:37.459328   77239 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 23:49:37.459366   77239 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:49:37.459412   77239 ssh_runner.go:195] Run: which crictl
	I1209 23:49:37.479485   77239 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 23:49:37.479527   77239 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 23:49:37.479531   77239 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 23:49:37.479577   77239 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:49:37.479600   77239 ssh_runner.go:195] Run: which crictl
	I1209 23:49:37.479621   77239 ssh_runner.go:195] Run: which crictl
	I1209 23:49:37.491719   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:49:37.491737   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:49:37.491806   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:49:37.491867   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:49:37.491888   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:49:37.491911   77239 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 23:49:37.491929   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:49:37.491949   77239 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 23:49:37.491982   77239 ssh_runner.go:195] Run: which crictl
	I1209 23:49:37.645142   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:49:37.649915   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:49:37.650048   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:49:37.650174   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:49:37.650222   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:49:37.650267   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:49:37.744861   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:49:37.760604   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:49:37.760634   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:49:37.805656   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:49:37.805708   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:49:37.805754   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:49:37.805825   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:49:37.860817   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:49:37.879131   77239 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:49:37.879135   77239 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 23:49:37.955245   77239 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 23:49:37.955280   77239 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 23:49:37.955345   77239 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 23:49:37.955367   77239 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 23:49:37.983224   77239 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 23:49:37.983435   77239 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 23:49:38.100697   77239 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:49:38.238410   77239 cache_images.go:92] duration metric: took 1.101585183s to LoadCachedImages
	W1209 23:49:38.238530   77239 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I1209 23:49:38.238549   77239 kubeadm.go:934] updating node { 192.168.39.188 8443 v1.20.0 crio true true} ...
	I1209 23:49:38.238669   77239 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-720064 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:49:38.238749   77239 ssh_runner.go:195] Run: crio config
	I1209 23:49:38.285269   77239 cni.go:84] Creating CNI manager for ""
	I1209 23:49:38.285296   77239 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:49:38.285310   77239 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:49:38.285336   77239 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.188 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-720064 NodeName:old-k8s-version-720064 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 23:49:38.285544   77239 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-720064"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.188
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.188"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:49:38.285621   77239 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 23:49:38.295750   77239 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:49:38.295831   77239 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:49:38.305253   77239 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 23:49:38.322243   77239 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:49:38.338650   77239 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 23:49:38.355135   77239 ssh_runner.go:195] Run: grep 192.168.39.188	control-plane.minikube.internal$ /etc/hosts
	I1209 23:49:38.358924   77239 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:49:38.370974   77239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:49:38.520038   77239 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:49:38.537653   77239 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064 for IP: 192.168.39.188
	I1209 23:49:38.537682   77239 certs.go:194] generating shared ca certs ...
	I1209 23:49:38.537702   77239 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:49:38.537880   77239 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:49:38.537933   77239 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:49:38.537946   77239 certs.go:256] generating profile certs ...
	I1209 23:49:38.538016   77239 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/client.key
	I1209 23:49:38.538038   77239 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/client.crt with IP's: []
	I1209 23:49:38.792305   77239 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/client.crt ...
	I1209 23:49:38.792333   77239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/client.crt: {Name:mka62baf12ab2dddefeb80f2c022962950fe5694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:49:38.792496   77239 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/client.key ...
	I1209 23:49:38.792511   77239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/client.key: {Name:mkaa41673f36bb294890999511d23b0e4b1bbf76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:49:38.792587   77239 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key.abcbe3fa
	I1209 23:49:38.792603   77239 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.crt.abcbe3fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.188]
	I1209 23:49:38.859846   77239 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.crt.abcbe3fa ...
	I1209 23:49:38.859873   77239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.crt.abcbe3fa: {Name:mk66d8422edb255b2f18e63b94d0a6687cc52282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:49:38.860031   77239 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key.abcbe3fa ...
	I1209 23:49:38.860044   77239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key.abcbe3fa: {Name:mk36359fc13890c6317e7f6c70481bbf1dea51b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:49:38.860113   77239 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.crt.abcbe3fa -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.crt
	I1209 23:49:38.860180   77239 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key.abcbe3fa -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key
	I1209 23:49:38.860230   77239 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key
	I1209 23:49:38.860245   77239 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.crt with IP's: []
	I1209 23:49:38.944597   77239 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.crt ...
	I1209 23:49:38.944624   77239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.crt: {Name:mkba9cf3b92fe99bead886908d87cdf0431368dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:49:38.948658   77239 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key ...
	I1209 23:49:38.948683   77239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key: {Name:mk7b3bbfa2337101e26a9d4f48175d1d2f46c3d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:49:38.948854   77239 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:49:38.948902   77239 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:49:38.948912   77239 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:49:38.948944   77239 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:49:38.948982   77239 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:49:38.949023   77239 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:49:38.949090   77239 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:49:38.950366   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:49:38.979707   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:49:39.008373   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:49:39.034108   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:49:39.061075   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 23:49:39.086747   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:49:39.112337   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:49:39.146369   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:49:39.172999   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:49:39.201290   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:49:39.225710   77239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:49:39.249341   77239 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:49:39.266392   77239 ssh_runner.go:195] Run: openssl version
	I1209 23:49:39.272999   77239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:49:39.283901   77239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:49:39.288557   77239 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:49:39.288618   77239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:49:39.294326   77239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:49:39.304763   77239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:49:39.315481   77239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:49:39.320230   77239 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:49:39.320287   77239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:49:39.326562   77239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:49:39.339665   77239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:49:39.350920   77239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:49:39.359620   77239 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:49:39.359693   77239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:49:39.368001   77239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:49:39.390105   77239 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:49:39.395400   77239 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 23:49:39.395460   77239 kubeadm.go:392] StartCluster: {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:49:39.395546   77239 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:49:39.395652   77239 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:49:39.444969   77239 cri.go:89] found id: ""
	I1209 23:49:39.445053   77239 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:49:39.455350   77239 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:49:39.465370   77239 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:49:39.474650   77239 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:49:39.474673   77239 kubeadm.go:157] found existing configuration files:
	
	I1209 23:49:39.474717   77239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:49:39.483447   77239 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:49:39.483513   77239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:49:39.494088   77239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:49:39.502931   77239 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:49:39.502992   77239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:49:39.513461   77239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:49:39.522894   77239 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:49:39.522960   77239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:49:39.532510   77239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:49:39.541722   77239 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:49:39.541787   77239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:49:39.550946   77239 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 23:49:39.663996   77239 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 23:49:39.664091   77239 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 23:49:39.807230   77239 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 23:49:39.807374   77239 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 23:49:39.807516   77239 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 23:49:39.984522   77239 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 23:49:40.200832   77239 out.go:235]   - Generating certificates and keys ...
	I1209 23:49:40.200999   77239 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 23:49:40.201092   77239 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 23:49:40.201204   77239 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 23:49:40.327155   77239 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 23:49:40.523675   77239 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 23:49:40.707391   77239 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 23:49:40.844425   77239 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 23:49:40.844782   77239 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-720064] and IPs [192.168.39.188 127.0.0.1 ::1]
	I1209 23:49:41.070622   77239 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 23:49:41.070887   77239 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-720064] and IPs [192.168.39.188 127.0.0.1 ::1]
	I1209 23:49:41.305514   77239 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 23:49:41.453932   77239 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 23:49:41.692379   77239 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 23:49:41.692813   77239 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 23:49:42.202022   77239 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 23:49:42.529548   77239 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 23:49:42.695066   77239 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 23:49:42.784622   77239 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 23:49:42.806250   77239 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 23:49:42.807916   77239 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 23:49:42.808128   77239 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 23:49:42.991096   77239 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 23:49:42.992788   77239 out.go:235]   - Booting up control plane ...
	I1209 23:49:42.992911   77239 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 23:49:43.001032   77239 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 23:49:43.011793   77239 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 23:49:43.013504   77239 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 23:49:43.018607   77239 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 23:50:23.010709   77239 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 23:50:23.011208   77239 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:50:23.011533   77239 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:50:28.011775   77239 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:50:28.012014   77239 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:50:38.010937   77239 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:50:38.011202   77239 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:50:58.010245   77239 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:50:58.010497   77239 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:51:38.011140   77239 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:51:38.011405   77239 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:51:38.011431   77239 kubeadm.go:310] 
	I1209 23:51:38.011495   77239 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 23:51:38.011540   77239 kubeadm.go:310] 		timed out waiting for the condition
	I1209 23:51:38.011550   77239 kubeadm.go:310] 
	I1209 23:51:38.011626   77239 kubeadm.go:310] 	This error is likely caused by:
	I1209 23:51:38.011669   77239 kubeadm.go:310] 		- The kubelet is not running
	I1209 23:51:38.011816   77239 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 23:51:38.011827   77239 kubeadm.go:310] 
	I1209 23:51:38.011966   77239 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 23:51:38.012008   77239 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 23:51:38.012061   77239 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 23:51:38.012081   77239 kubeadm.go:310] 
	I1209 23:51:38.012249   77239 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 23:51:38.012365   77239 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 23:51:38.012377   77239 kubeadm.go:310] 
	I1209 23:51:38.012515   77239 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 23:51:38.012648   77239 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 23:51:38.012762   77239 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 23:51:38.012887   77239 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 23:51:38.012925   77239 kubeadm.go:310] 
	I1209 23:51:38.013069   77239 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 23:51:38.013193   77239 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 23:51:38.013379   77239 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1209 23:51:38.013407   77239 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-720064] and IPs [192.168.39.188 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-720064] and IPs [192.168.39.188 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-720064] and IPs [192.168.39.188 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-720064] and IPs [192.168.39.188 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 23:51:38.013461   77239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 23:51:38.449395   77239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:51:38.463000   77239 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:51:38.474225   77239 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:51:38.474244   77239 kubeadm.go:157] found existing configuration files:
	
	I1209 23:51:38.474312   77239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:51:38.483186   77239 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:51:38.483248   77239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:51:38.492383   77239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:51:38.500771   77239 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:51:38.500822   77239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:51:38.509431   77239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:51:38.517682   77239 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:51:38.517736   77239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:51:38.528286   77239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:51:38.536359   77239 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:51:38.536419   77239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:51:38.546337   77239 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 23:51:38.630466   77239 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 23:51:38.630681   77239 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 23:51:38.779461   77239 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 23:51:38.779620   77239 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 23:51:38.779735   77239 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 23:51:38.955830   77239 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 23:51:38.958442   77239 out.go:235]   - Generating certificates and keys ...
	I1209 23:51:38.958595   77239 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 23:51:38.958730   77239 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 23:51:38.958960   77239 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 23:51:38.959299   77239 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 23:51:38.959423   77239 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 23:51:38.959502   77239 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 23:51:38.959614   77239 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 23:51:38.959711   77239 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 23:51:38.959854   77239 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 23:51:38.960017   77239 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 23:51:38.960101   77239 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 23:51:38.960212   77239 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 23:51:39.085415   77239 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 23:51:39.393278   77239 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 23:51:39.462708   77239 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 23:51:39.745275   77239 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 23:51:39.764348   77239 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 23:51:39.765439   77239 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 23:51:39.765491   77239 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 23:51:39.910324   77239 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 23:51:39.912162   77239 out.go:235]   - Booting up control plane ...
	I1209 23:51:39.912272   77239 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 23:51:39.923863   77239 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 23:51:39.925304   77239 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 23:51:39.926242   77239 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 23:51:39.930848   77239 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 23:52:19.934881   77239 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 23:52:19.934965   77239 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:52:19.935146   77239 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:52:24.935439   77239 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:52:24.935681   77239 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:52:34.936351   77239 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:52:34.936643   77239 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:52:54.936184   77239 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:52:54.936445   77239 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:53:34.936683   77239 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 23:53:34.936885   77239 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 23:53:34.936896   77239 kubeadm.go:310] 
	I1209 23:53:34.936993   77239 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 23:53:34.937064   77239 kubeadm.go:310] 		timed out waiting for the condition
	I1209 23:53:34.937072   77239 kubeadm.go:310] 
	I1209 23:53:34.937110   77239 kubeadm.go:310] 	This error is likely caused by:
	I1209 23:53:34.937160   77239 kubeadm.go:310] 		- The kubelet is not running
	I1209 23:53:34.937314   77239 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 23:53:34.937331   77239 kubeadm.go:310] 
	I1209 23:53:34.937456   77239 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 23:53:34.937519   77239 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 23:53:34.937578   77239 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 23:53:34.937601   77239 kubeadm.go:310] 
	I1209 23:53:34.937759   77239 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 23:53:34.937866   77239 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 23:53:34.937880   77239 kubeadm.go:310] 
	I1209 23:53:34.938035   77239 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 23:53:34.938154   77239 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 23:53:34.938257   77239 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 23:53:34.938357   77239 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 23:53:34.938369   77239 kubeadm.go:310] 
	I1209 23:53:34.938575   77239 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 23:53:34.938711   77239 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 23:53:34.938811   77239 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 23:53:34.938877   77239 kubeadm.go:394] duration metric: took 3m55.543419875s to StartCluster
	I1209 23:53:34.938925   77239 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:53:34.938984   77239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:53:34.975315   77239 cri.go:89] found id: ""
	I1209 23:53:34.975345   77239 logs.go:282] 0 containers: []
	W1209 23:53:34.975352   77239 logs.go:284] No container was found matching "kube-apiserver"
	I1209 23:53:34.975360   77239 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:53:34.975426   77239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:53:35.007860   77239 cri.go:89] found id: ""
	I1209 23:53:35.007886   77239 logs.go:282] 0 containers: []
	W1209 23:53:35.007894   77239 logs.go:284] No container was found matching "etcd"
	I1209 23:53:35.007899   77239 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:53:35.007949   77239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:53:35.041963   77239 cri.go:89] found id: ""
	I1209 23:53:35.041991   77239 logs.go:282] 0 containers: []
	W1209 23:53:35.041999   77239 logs.go:284] No container was found matching "coredns"
	I1209 23:53:35.042006   77239 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:53:35.042071   77239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:53:35.074680   77239 cri.go:89] found id: ""
	I1209 23:53:35.074709   77239 logs.go:282] 0 containers: []
	W1209 23:53:35.074718   77239 logs.go:284] No container was found matching "kube-scheduler"
	I1209 23:53:35.074724   77239 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:53:35.074776   77239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:53:35.107617   77239 cri.go:89] found id: ""
	I1209 23:53:35.107649   77239 logs.go:282] 0 containers: []
	W1209 23:53:35.107661   77239 logs.go:284] No container was found matching "kube-proxy"
	I1209 23:53:35.107669   77239 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:53:35.107723   77239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:53:35.142351   77239 cri.go:89] found id: ""
	I1209 23:53:35.142382   77239 logs.go:282] 0 containers: []
	W1209 23:53:35.142393   77239 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 23:53:35.142401   77239 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:53:35.142463   77239 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:53:35.185624   77239 cri.go:89] found id: ""
	I1209 23:53:35.185651   77239 logs.go:282] 0 containers: []
	W1209 23:53:35.185660   77239 logs.go:284] No container was found matching "kindnet"
	I1209 23:53:35.185669   77239 logs.go:123] Gathering logs for kubelet ...
	I1209 23:53:35.185696   77239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 23:53:35.237033   77239 logs.go:123] Gathering logs for dmesg ...
	I1209 23:53:35.237069   77239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:53:35.250420   77239 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:53:35.250449   77239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 23:53:35.354491   77239 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 23:53:35.354517   77239 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:53:35.354532   77239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:53:35.453613   77239 logs.go:123] Gathering logs for container status ...
	I1209 23:53:35.453647   77239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1209 23:53:35.489146   77239 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1209 23:53:35.489217   77239 out.go:270] * 
	* 
	W1209 23:53:35.489288   77239 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 23:53:35.489303   77239 out.go:270] * 
	* 
	W1209 23:53:35.490132   77239 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 23:53:35.493822   77239 out.go:201] 
	W1209 23:53:35.495294   77239 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 23:53:35.495355   77239 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 23:53:35.495380   77239 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 23:53:35.497034   77239 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-720064 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064: exit status 6 (233.589648ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:53:35.774661   83507 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-720064" does not appear in /home/jenkins/minikube-integration/19888-18950/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-720064" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (278.46s)
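A minimal triage sketch for the failure above, using only the commands the log itself recommends (systemctl/journalctl for the kubelet, crictl against the CRI-O socket, and minikube's own suggestion to pass --extra-config=kubelet.cgroup-driver=systemd). The profile name is taken from the log; the retry line below omits most of the harness flags, so treat it as a hand-run follow-up one could try on the Jenkins host, not as what the test executes:

	# Check the kubelet on the node behind the old-k8s-version-720064 profile
	minikube ssh -p old-k8s-version-720064 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-720064 "sudo journalctl -xeu kubelet | tail -n 100"

	# List any control-plane containers CRI-O managed to start (command quoted from the kubeadm hint above)
	minikube ssh -p old-k8s-version-720064 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the cgroup-driver override the log suggests
	out/minikube-linux-amd64 start -p old-k8s-version-720064 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd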

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-048296 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-048296 --alsologtostderr -v=3: exit status 82 (2m0.487275774s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-048296"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:51:25.554006   82748 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:51:25.554142   82748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:51:25.554153   82748 out.go:358] Setting ErrFile to fd 2...
	I1209 23:51:25.554157   82748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:51:25.554360   82748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:51:25.554626   82748 out.go:352] Setting JSON to false
	I1209 23:51:25.554724   82748 mustload.go:65] Loading cluster: no-preload-048296
	I1209 23:51:25.555109   82748 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:51:25.555189   82748 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/config.json ...
	I1209 23:51:25.555371   82748 mustload.go:65] Loading cluster: no-preload-048296
	I1209 23:51:25.555497   82748 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:51:25.555537   82748 stop.go:39] StopHost: no-preload-048296
	I1209 23:51:25.555981   82748 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:51:25.556028   82748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:51:25.572608   82748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44863
	I1209 23:51:25.573105   82748 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:51:25.573617   82748 main.go:141] libmachine: Using API Version  1
	I1209 23:51:25.573643   82748 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:51:25.574073   82748 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:51:25.576666   82748 out.go:177] * Stopping node "no-preload-048296"  ...
	I1209 23:51:25.578076   82748 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 23:51:25.578109   82748 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:51:25.578392   82748 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 23:51:25.578426   82748 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:51:25.581174   82748 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:51:25.581529   82748 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:49:47 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:51:25.581578   82748 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:51:25.581638   82748 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:51:25.581816   82748 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:51:25.581972   82748 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:51:25.582147   82748 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:51:25.669227   82748 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 23:51:25.732590   82748 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1209 23:51:25.789595   82748 main.go:141] libmachine: Stopping "no-preload-048296"...
	I1209 23:51:25.789629   82748 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1209 23:51:25.791418   82748 main.go:141] libmachine: (no-preload-048296) Calling .Stop
	I1209 23:51:25.795153   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 0/120
	I1209 23:51:26.797590   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 1/120
	I1209 23:51:27.799071   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 2/120
	I1209 23:51:28.800436   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 3/120
	I1209 23:51:29.802046   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 4/120
	I1209 23:51:30.803946   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 5/120
	I1209 23:51:31.805320   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 6/120
	I1209 23:51:32.806628   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 7/120
	I1209 23:51:33.808011   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 8/120
	I1209 23:51:34.809335   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 9/120
	I1209 23:51:35.811557   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 10/120
	I1209 23:51:36.812883   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 11/120
	I1209 23:51:37.814234   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 12/120
	I1209 23:51:38.815678   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 13/120
	I1209 23:51:39.817232   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 14/120
	I1209 23:51:40.819301   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 15/120
	I1209 23:51:41.820765   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 16/120
	I1209 23:51:42.822249   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 17/120
	I1209 23:51:43.823882   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 18/120
	I1209 23:51:44.825999   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 19/120
	I1209 23:51:45.827670   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 20/120
	I1209 23:51:46.828991   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 21/120
	I1209 23:51:47.830333   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 22/120
	I1209 23:51:48.831692   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 23/120
	I1209 23:51:49.833032   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 24/120
	I1209 23:51:50.835354   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 25/120
	I1209 23:51:51.836598   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 26/120
	I1209 23:51:52.838150   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 27/120
	I1209 23:51:53.839706   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 28/120
	I1209 23:51:54.840977   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 29/120
	I1209 23:51:55.842346   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 30/120
	I1209 23:51:56.843650   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 31/120
	I1209 23:51:57.844983   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 32/120
	I1209 23:51:58.846302   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 33/120
	I1209 23:51:59.847690   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 34/120
	I1209 23:52:00.849783   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 35/120
	I1209 23:52:01.851120   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 36/120
	I1209 23:52:02.852714   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 37/120
	I1209 23:52:03.854149   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 38/120
	I1209 23:52:04.855497   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 39/120
	I1209 23:52:05.857793   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 40/120
	I1209 23:52:06.859847   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 41/120
	I1209 23:52:07.861294   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 42/120
	I1209 23:52:08.862777   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 43/120
	I1209 23:52:09.864057   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 44/120
	I1209 23:52:10.866167   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 45/120
	I1209 23:52:11.867895   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 46/120
	I1209 23:52:12.869215   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 47/120
	I1209 23:52:13.870628   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 48/120
	I1209 23:52:14.871934   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 49/120
	I1209 23:52:15.874172   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 50/120
	I1209 23:52:16.875776   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 51/120
	I1209 23:52:17.878061   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 52/120
	I1209 23:52:18.879597   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 53/120
	I1209 23:52:19.880893   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 54/120
	I1209 23:52:20.882820   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 55/120
	I1209 23:52:21.884114   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 56/120
	I1209 23:52:22.885401   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 57/120
	I1209 23:52:23.886930   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 58/120
	I1209 23:52:24.888320   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 59/120
	I1209 23:52:25.890518   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 60/120
	I1209 23:52:26.891746   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 61/120
	I1209 23:52:27.893125   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 62/120
	I1209 23:52:28.894576   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 63/120
	I1209 23:52:29.896129   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 64/120
	I1209 23:52:30.898123   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 65/120
	I1209 23:52:31.899509   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 66/120
	I1209 23:52:32.900875   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 67/120
	I1209 23:52:33.902519   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 68/120
	I1209 23:52:34.903870   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 69/120
	I1209 23:52:35.906130   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 70/120
	I1209 23:52:36.907656   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 71/120
	I1209 23:52:37.908829   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 72/120
	I1209 23:52:38.910206   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 73/120
	I1209 23:52:39.911490   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 74/120
	I1209 23:52:40.913416   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 75/120
	I1209 23:52:41.914916   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 76/120
	I1209 23:52:42.916097   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 77/120
	I1209 23:52:43.917478   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 78/120
	I1209 23:52:44.918699   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 79/120
	I1209 23:52:45.920755   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 80/120
	I1209 23:52:46.922202   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 81/120
	I1209 23:52:47.923370   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 82/120
	I1209 23:52:48.924876   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 83/120
	I1209 23:52:49.926099   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 84/120
	I1209 23:52:50.928118   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 85/120
	I1209 23:52:51.929618   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 86/120
	I1209 23:52:52.930884   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 87/120
	I1209 23:52:53.932382   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 88/120
	I1209 23:52:54.934987   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 89/120
	I1209 23:52:55.937194   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 90/120
	I1209 23:52:56.938717   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 91/120
	I1209 23:52:57.940110   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 92/120
	I1209 23:52:58.941502   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 93/120
	I1209 23:52:59.942882   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 94/120
	I1209 23:53:00.944432   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 95/120
	I1209 23:53:01.946131   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 96/120
	I1209 23:53:02.947746   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 97/120
	I1209 23:53:03.950141   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 98/120
	I1209 23:53:04.951430   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 99/120
	I1209 23:53:05.953693   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 100/120
	I1209 23:53:06.955020   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 101/120
	I1209 23:53:07.956457   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 102/120
	I1209 23:53:08.957779   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 103/120
	I1209 23:53:09.959197   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 104/120
	I1209 23:53:10.961215   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 105/120
	I1209 23:53:11.962539   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 106/120
	I1209 23:53:12.963949   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 107/120
	I1209 23:53:13.965138   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 108/120
	I1209 23:53:14.966772   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 109/120
	I1209 23:53:15.968712   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 110/120
	I1209 23:53:16.969926   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 111/120
	I1209 23:53:17.971429   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 112/120
	I1209 23:53:18.972753   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 113/120
	I1209 23:53:19.974256   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 114/120
	I1209 23:53:20.976442   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 115/120
	I1209 23:53:21.977584   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 116/120
	I1209 23:53:22.979175   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 117/120
	I1209 23:53:23.980419   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 118/120
	I1209 23:53:24.981773   82748 main.go:141] libmachine: (no-preload-048296) Waiting for machine to stop 119/120
	I1209 23:53:25.982927   82748 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1209 23:53:25.983009   82748 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1209 23:53:25.985170   82748 out.go:201] 
	W1209 23:53:25.986610   82748 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1209 23:53:25.986626   82748 out.go:270] * 
	* 
	W1209 23:53:25.989149   82748 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 23:53:25.990493   82748 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-048296 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048296 -n no-preload-048296
E1209 23:53:28.185798   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:28.192138   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:28.203457   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:28.224891   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:28.266365   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:28.347981   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048296 -n no-preload-048296: exit status 3 (18.511545209s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:53:44.503884   83412 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.182:22: connect: no route to host
	E1209 23:53:44.503908   83412 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.182:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-048296" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.00s)
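The stop above polls the kvm2 driver for 120 iterations (~2 minutes) and gives up with GUEST_STOP_TIMEOUT while the VM still reports "Running"; by the time the post-mortem status runs, SSH to 192.168.61.182 gets "no route to host", so the guest appears to have gone down only after the timeout. A hedged follow-up sketch, assuming the kvm2 driver names the libvirt domain after the profile (as the "domain no-preload-048296" lines suggest) and that virsh is available on the host:

	# What does libvirt itself think the domain state is?
	sudo virsh list --all
	sudo virsh domstate no-preload-048296

	# Collect the logs the failure box asks for, then retry the stop
	out/minikube-linux-amd64 logs -p no-preload-048296 --file=logs.txt
	out/minikube-linux-amd64 stop -p no-preload-048296 --alsologtostderr -v=3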

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-825613 --alsologtostderr -v=3
E1209 23:51:29.740431   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:33.103278   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:33.109678   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:33.121047   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:33.142464   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:33.184636   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:33.266119   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:33.427670   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:33.749404   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:34.391529   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:35.673727   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:38.235548   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:43.357466   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:49.408254   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:53.599421   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:52:10.702726   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-825613 --alsologtostderr -v=3: exit status 82 (2m0.482562707s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-825613"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:51:27.927999   82844 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:51:27.928274   82844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:51:27.928285   82844 out.go:358] Setting ErrFile to fd 2...
	I1209 23:51:27.928289   82844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:51:27.928530   82844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:51:27.928811   82844 out.go:352] Setting JSON to false
	I1209 23:51:27.928906   82844 mustload.go:65] Loading cluster: embed-certs-825613
	I1209 23:51:27.929286   82844 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:51:27.929373   82844 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/config.json ...
	I1209 23:51:27.929568   82844 mustload.go:65] Loading cluster: embed-certs-825613
	I1209 23:51:27.929690   82844 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:51:27.929739   82844 stop.go:39] StopHost: embed-certs-825613
	I1209 23:51:27.930128   82844 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:51:27.930182   82844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:51:27.946080   82844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I1209 23:51:27.946703   82844 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:51:27.947330   82844 main.go:141] libmachine: Using API Version  1
	I1209 23:51:27.947366   82844 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:51:27.947791   82844 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:51:27.949679   82844 out.go:177] * Stopping node "embed-certs-825613"  ...
	I1209 23:51:27.951175   82844 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 23:51:27.951208   82844 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:51:27.951434   82844 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 23:51:27.951458   82844 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:51:27.954569   82844 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:51:27.954907   82844 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:50:12 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:51:27.954936   82844 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:51:27.955010   82844 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:51:27.955176   82844 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:51:27.955388   82844 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:51:27.955593   82844 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:51:28.046428   82844 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 23:51:28.103441   82844 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1209 23:51:28.157599   82844 main.go:141] libmachine: Stopping "embed-certs-825613"...
	I1209 23:51:28.157633   82844 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:51:28.159291   82844 main.go:141] libmachine: (embed-certs-825613) Calling .Stop
	I1209 23:51:28.163069   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 0/120
	I1209 23:51:29.164548   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 1/120
	I1209 23:51:30.166079   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 2/120
	I1209 23:51:31.167335   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 3/120
	I1209 23:51:32.169315   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 4/120
	I1209 23:51:33.171306   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 5/120
	I1209 23:51:34.172984   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 6/120
	I1209 23:51:35.174524   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 7/120
	I1209 23:51:36.175956   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 8/120
	I1209 23:51:37.177467   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 9/120
	I1209 23:51:38.179948   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 10/120
	I1209 23:51:39.182088   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 11/120
	I1209 23:51:40.183533   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 12/120
	I1209 23:51:41.184888   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 13/120
	I1209 23:51:42.186226   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 14/120
	I1209 23:51:43.188220   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 15/120
	I1209 23:51:44.189960   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 16/120
	I1209 23:51:45.191417   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 17/120
	I1209 23:51:46.192795   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 18/120
	I1209 23:51:47.194147   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 19/120
	I1209 23:51:48.196229   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 20/120
	I1209 23:51:49.197813   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 21/120
	I1209 23:51:50.199135   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 22/120
	I1209 23:51:51.201250   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 23/120
	I1209 23:51:52.202574   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 24/120
	I1209 23:51:53.204732   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 25/120
	I1209 23:51:54.206092   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 26/120
	I1209 23:51:55.207517   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 27/120
	I1209 23:51:56.209198   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 28/120
	I1209 23:51:57.210554   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 29/120
	I1209 23:51:58.212853   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 30/120
	I1209 23:51:59.214171   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 31/120
	I1209 23:52:00.216314   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 32/120
	I1209 23:52:01.217600   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 33/120
	I1209 23:52:02.218934   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 34/120
	I1209 23:52:03.220864   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 35/120
	I1209 23:52:04.222197   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 36/120
	I1209 23:52:05.223705   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 37/120
	I1209 23:52:06.225088   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 38/120
	I1209 23:52:07.226275   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 39/120
	I1209 23:52:08.228656   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 40/120
	I1209 23:52:09.229950   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 41/120
	I1209 23:52:10.231335   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 42/120
	I1209 23:52:11.232713   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 43/120
	I1209 23:52:12.233969   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 44/120
	I1209 23:52:13.235979   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 45/120
	I1209 23:52:14.237561   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 46/120
	I1209 23:52:15.239116   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 47/120
	I1209 23:52:16.240661   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 48/120
	I1209 23:52:17.241922   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 49/120
	I1209 23:52:18.244105   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 50/120
	I1209 23:52:19.245430   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 51/120
	I1209 23:52:20.246948   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 52/120
	I1209 23:52:21.248822   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 53/120
	I1209 23:52:22.250111   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 54/120
	I1209 23:52:23.252216   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 55/120
	I1209 23:52:24.253764   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 56/120
	I1209 23:52:25.254994   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 57/120
	I1209 23:52:26.256315   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 58/120
	I1209 23:52:27.258092   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 59/120
	I1209 23:52:28.260352   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 60/120
	I1209 23:52:29.261748   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 61/120
	I1209 23:52:30.263361   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 62/120
	I1209 23:52:31.264824   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 63/120
	I1209 23:52:32.266156   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 64/120
	I1209 23:52:33.268173   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 65/120
	I1209 23:52:34.269569   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 66/120
	I1209 23:52:35.271064   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 67/120
	I1209 23:52:36.272510   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 68/120
	I1209 23:52:37.273879   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 69/120
	I1209 23:52:38.276166   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 70/120
	I1209 23:52:39.277693   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 71/120
	I1209 23:52:40.279018   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 72/120
	I1209 23:52:41.280617   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 73/120
	I1209 23:52:42.282064   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 74/120
	I1209 23:52:43.284258   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 75/120
	I1209 23:52:44.285629   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 76/120
	I1209 23:52:45.287064   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 77/120
	I1209 23:52:46.288538   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 78/120
	I1209 23:52:47.289965   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 79/120
	I1209 23:52:48.292116   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 80/120
	I1209 23:52:49.293531   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 81/120
	I1209 23:52:50.294867   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 82/120
	I1209 23:52:51.296282   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 83/120
	I1209 23:52:52.297545   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 84/120
	I1209 23:52:53.299717   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 85/120
	I1209 23:52:54.301108   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 86/120
	I1209 23:52:55.302834   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 87/120
	I1209 23:52:56.304240   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 88/120
	I1209 23:52:57.305501   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 89/120
	I1209 23:52:58.307669   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 90/120
	I1209 23:52:59.309051   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 91/120
	I1209 23:53:00.310552   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 92/120
	I1209 23:53:01.312621   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 93/120
	I1209 23:53:02.313842   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 94/120
	I1209 23:53:03.315831   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 95/120
	I1209 23:53:04.317296   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 96/120
	I1209 23:53:05.318755   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 97/120
	I1209 23:53:06.320379   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 98/120
	I1209 23:53:07.321903   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 99/120
	I1209 23:53:08.324124   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 100/120
	I1209 23:53:09.325600   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 101/120
	I1209 23:53:10.327113   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 102/120
	I1209 23:53:11.328487   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 103/120
	I1209 23:53:12.330150   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 104/120
	I1209 23:53:13.332168   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 105/120
	I1209 23:53:14.333902   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 106/120
	I1209 23:53:15.335437   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 107/120
	I1209 23:53:16.337001   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 108/120
	I1209 23:53:17.338547   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 109/120
	I1209 23:53:18.340649   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 110/120
	I1209 23:53:19.342056   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 111/120
	I1209 23:53:20.343361   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 112/120
	I1209 23:53:21.344650   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 113/120
	I1209 23:53:22.346136   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 114/120
	I1209 23:53:23.348404   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 115/120
	I1209 23:53:24.350096   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 116/120
	I1209 23:53:25.351555   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 117/120
	I1209 23:53:26.352852   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 118/120
	I1209 23:53:27.354307   82844 main.go:141] libmachine: (embed-certs-825613) Waiting for machine to stop 119/120
	I1209 23:53:28.355812   82844 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1209 23:53:28.355872   82844 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1209 23:53:28.357662   82844 out.go:201] 
	W1209 23:53:28.358894   82844 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1209 23:53:28.358904   82844 out.go:270] * 
	* 
	W1209 23:53:28.361261   82844 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 23:53:28.362604   82844 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-825613 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-825613 -n embed-certs-825613
E1209 23:53:28.510272   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:28.831789   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:29.473884   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:30.756118   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:32.624711   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:33.318023   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-825613 -n embed-certs-825613: exit status 3 (18.444820055s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:53:46.807913   83442 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.19:22: connect: no route to host
	E1209 23:53:46.807935   83442 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.19:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-825613" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.93s)
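
The stop path above backs up /etc/cni and /etc/kubernetes, asks libvirt to stop the domain, then polls the machine state once per second for 120 attempts; when the guest is still "Running" after the loop it gives up with GUEST_STOP_TIMEOUT and exit status 82. A rough sketch of inspecting the same domain by hand, assuming virsh is available on the host and that the kvm2 domain name matches the profile name (as the DBG lines above suggest):

	virsh domstate embed-certs-825613     # expect "running" if the guest ignored the stop request
	virsh shutdown embed-certs-825613     # retry a graceful shutdown
	virsh destroy embed-certs-825613      # last resort: hard power-off, for post-mortem only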

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-871210 --alsologtostderr -v=3
E1209 23:52:55.042832   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:00.638937   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:00.645380   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:00.656713   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:00.678136   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:00.719500   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:00.800963   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:00.962706   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:01.284723   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:01.926094   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:03.207418   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:05.768849   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:10.891113   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:12.522025   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:21.132903   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-871210 --alsologtostderr -v=3: exit status 82 (2m0.485432368s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-871210"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:52:21.194679   83157 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:52:21.194823   83157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:52:21.194834   83157 out.go:358] Setting ErrFile to fd 2...
	I1209 23:52:21.194841   83157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:52:21.195013   83157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:52:21.195247   83157 out.go:352] Setting JSON to false
	I1209 23:52:21.195323   83157 mustload.go:65] Loading cluster: default-k8s-diff-port-871210
	I1209 23:52:21.195739   83157 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:52:21.195810   83157 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/config.json ...
	I1209 23:52:21.195985   83157 mustload.go:65] Loading cluster: default-k8s-diff-port-871210
	I1209 23:52:21.196085   83157 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:52:21.196123   83157 stop.go:39] StopHost: default-k8s-diff-port-871210
	I1209 23:52:21.196482   83157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:52:21.196528   83157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:52:21.211352   83157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35295
	I1209 23:52:21.211815   83157 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:52:21.212397   83157 main.go:141] libmachine: Using API Version  1
	I1209 23:52:21.212422   83157 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:52:21.212790   83157 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:52:21.215530   83157 out.go:177] * Stopping node "default-k8s-diff-port-871210"  ...
	I1209 23:52:21.217019   83157 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 23:52:21.217058   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:52:21.217286   83157 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 23:52:21.217351   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:52:21.220420   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:52:21.220838   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:50:53 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:52:21.220872   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:52:21.220945   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:52:21.221149   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:52:21.221293   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:52:21.221435   83157 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:52:21.305603   83157 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 23:52:21.369481   83157 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1209 23:52:21.432676   83157 main.go:141] libmachine: Stopping "default-k8s-diff-port-871210"...
	I1209 23:52:21.432721   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1209 23:52:21.434519   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Stop
	I1209 23:52:21.438249   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 0/120
	I1209 23:52:22.439860   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 1/120
	I1209 23:52:23.441126   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 2/120
	I1209 23:52:24.442694   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 3/120
	I1209 23:52:25.444139   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 4/120
	I1209 23:52:26.445946   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 5/120
	I1209 23:52:27.447496   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 6/120
	I1209 23:52:28.448809   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 7/120
	I1209 23:52:29.450305   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 8/120
	I1209 23:52:30.451986   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 9/120
	I1209 23:52:31.453569   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 10/120
	I1209 23:52:32.454970   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 11/120
	I1209 23:52:33.456698   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 12/120
	I1209 23:52:34.458144   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 13/120
	I1209 23:52:35.459953   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 14/120
	I1209 23:52:36.461940   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 15/120
	I1209 23:52:37.463352   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 16/120
	I1209 23:52:38.464748   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 17/120
	I1209 23:52:39.466422   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 18/120
	I1209 23:52:40.467833   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 19/120
	I1209 23:52:41.469955   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 20/120
	I1209 23:52:42.471452   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 21/120
	I1209 23:52:43.473024   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 22/120
	I1209 23:52:44.474509   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 23/120
	I1209 23:52:45.475984   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 24/120
	I1209 23:52:46.478047   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 25/120
	I1209 23:52:47.479476   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 26/120
	I1209 23:52:48.480817   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 27/120
	I1209 23:52:49.482315   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 28/120
	I1209 23:52:50.483755   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 29/120
	I1209 23:52:51.486023   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 30/120
	I1209 23:52:52.487355   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 31/120
	I1209 23:52:53.488776   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 32/120
	I1209 23:52:54.490139   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 33/120
	I1209 23:52:55.491655   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 34/120
	I1209 23:52:56.494046   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 35/120
	I1209 23:52:57.495401   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 36/120
	I1209 23:52:58.497015   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 37/120
	I1209 23:52:59.498804   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 38/120
	I1209 23:53:00.500254   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 39/120
	I1209 23:53:01.502437   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 40/120
	I1209 23:53:02.503846   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 41/120
	I1209 23:53:03.505289   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 42/120
	I1209 23:53:04.506692   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 43/120
	I1209 23:53:05.508136   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 44/120
	I1209 23:53:06.510528   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 45/120
	I1209 23:53:07.511931   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 46/120
	I1209 23:53:08.513289   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 47/120
	I1209 23:53:09.514742   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 48/120
	I1209 23:53:10.516059   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 49/120
	I1209 23:53:11.518407   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 50/120
	I1209 23:53:12.519709   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 51/120
	I1209 23:53:13.521103   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 52/120
	I1209 23:53:14.522522   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 53/120
	I1209 23:53:15.523926   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 54/120
	I1209 23:53:16.525868   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 55/120
	I1209 23:53:17.527291   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 56/120
	I1209 23:53:18.528653   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 57/120
	I1209 23:53:19.530177   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 58/120
	I1209 23:53:20.531626   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 59/120
	I1209 23:53:21.533770   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 60/120
	I1209 23:53:22.535005   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 61/120
	I1209 23:53:23.536422   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 62/120
	I1209 23:53:24.537702   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 63/120
	I1209 23:53:25.539149   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 64/120
	I1209 23:53:26.541434   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 65/120
	I1209 23:53:27.542875   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 66/120
	I1209 23:53:28.544123   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 67/120
	I1209 23:53:29.546232   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 68/120
	I1209 23:53:30.547503   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 69/120
	I1209 23:53:31.549991   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 70/120
	I1209 23:53:32.551334   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 71/120
	I1209 23:53:33.552862   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 72/120
	I1209 23:53:34.554330   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 73/120
	I1209 23:53:35.556368   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 74/120
	I1209 23:53:36.558075   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 75/120
	I1209 23:53:37.559409   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 76/120
	I1209 23:53:38.560869   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 77/120
	I1209 23:53:39.562458   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 78/120
	I1209 23:53:40.563915   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 79/120
	I1209 23:53:41.566328   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 80/120
	I1209 23:53:42.567837   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 81/120
	I1209 23:53:43.569224   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 82/120
	I1209 23:53:44.570762   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 83/120
	I1209 23:53:45.572347   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 84/120
	I1209 23:53:46.574291   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 85/120
	I1209 23:53:47.575641   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 86/120
	I1209 23:53:48.576937   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 87/120
	I1209 23:53:49.578403   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 88/120
	I1209 23:53:50.579823   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 89/120
	I1209 23:53:51.581697   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 90/120
	I1209 23:53:52.583002   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 91/120
	I1209 23:53:53.584497   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 92/120
	I1209 23:53:54.585904   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 93/120
	I1209 23:53:55.587241   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 94/120
	I1209 23:53:56.588962   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 95/120
	I1209 23:53:57.590328   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 96/120
	I1209 23:53:58.591830   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 97/120
	I1209 23:53:59.593152   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 98/120
	I1209 23:54:00.594559   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 99/120
	I1209 23:54:01.596755   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 100/120
	I1209 23:54:02.598153   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 101/120
	I1209 23:54:03.599616   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 102/120
	I1209 23:54:04.600977   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 103/120
	I1209 23:54:05.602437   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 104/120
	I1209 23:54:06.604595   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 105/120
	I1209 23:54:07.605928   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 106/120
	I1209 23:54:08.607337   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 107/120
	I1209 23:54:09.608841   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 108/120
	I1209 23:54:10.610244   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 109/120
	I1209 23:54:11.612406   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 110/120
	I1209 23:54:12.614058   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 111/120
	I1209 23:54:13.615739   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 112/120
	I1209 23:54:14.617088   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 113/120
	I1209 23:54:15.618484   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 114/120
	I1209 23:54:16.620476   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 115/120
	I1209 23:54:17.621782   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 116/120
	I1209 23:54:18.623173   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 117/120
	I1209 23:54:19.624605   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 118/120
	I1209 23:54:20.626017   83157 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for machine to stop 119/120
	I1209 23:54:21.626973   83157 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1209 23:54:21.627047   83157 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1209 23:54:21.628884   83157 out.go:201] 
	W1209 23:54:21.630107   83157 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1209 23:54:21.630123   83157 out.go:270] * 
	* 
	W1209 23:54:21.632749   83157 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 23:54:21.633903   83157 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-871210 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871210 -n default-k8s-diff-port-871210
E1209 23:54:21.832945   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:22.576177   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:23.114378   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:25.675727   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:30.797903   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871210 -n default-k8s-diff-port-871210: exit status 3 (18.676412061s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:54:40.311919   84016 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host
	E1209 23:54:40.311941   84016 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-871210" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)
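
This group hits the same GUEST_STOP_TIMEOUT path, and the failure box printed by minikube already names the follow-up artifacts. A short sketch of collecting them for this profile, with the paths copied from the output above and nothing new assumed:

	out/minikube-linux-amd64 logs --file=logs.txt -p default-k8s-diff-port-871210
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log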

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-720064 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-720064 create -f testdata/busybox.yaml: exit status 1 (44.401432ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-720064" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-720064 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064: exit status 6 (226.33445ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:53:36.046144   83547 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-720064" does not appear in /home/jenkins/minikube-integration/19888-18950/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-720064" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064: exit status 6 (223.837803ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:53:36.269037   83577 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-720064" does not appear in /home/jenkins/minikube-integration/19888-18950/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-720064" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
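Both post-mortem status checks above fail the same way: the profile's endpoint is missing from /home/jenkins/minikube-integration/19888-18950/kubeconfig, so every kubectl call against the old-k8s-version-720064 context dies with "context ... does not exist" before it ever reaches the cluster. A minimal sketch of how one might confirm and repair that by hand, using only commands the output itself points at (not run by the test):

    kubectl config get-contexts                                          # confirm old-k8s-version-720064 is absent from the kubeconfig
    out/minikube-linux-amd64 -p old-k8s-version-720064 update-context    # the fix the "stale minikube-vm" warning recommends
    kubectl --context old-k8s-version-720064 get nodes                   # re-check that the context now resolves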

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (103.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-720064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1209 23:53:38.331690   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:38.338083   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:38.349528   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:38.370905   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:38.412370   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:38.439869   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:38.494292   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:38.656259   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:38.978064   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:39.619799   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:40.901763   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:41.614690   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:43.463178   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-720064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m43.247522281s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-720064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-720064 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-720064 describe deploy/metrics-server -n kube-system: exit status 1 (44.65084ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-720064" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-720064 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064: exit status 6 (223.169193ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:55:19.786206   84401 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-720064" does not appear in /home/jenkins/minikube-integration/19888-18950/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-720064" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (103.52s)
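The addon enable itself fails inside the VM: the callback runs /var/lib/minikube/binaries/v1.20.0/kubectl against /var/lib/minikube/kubeconfig and gets "connection to the server localhost:8443 was refused", i.e. the apiserver is not up, rather than anything specific to metrics-server. A hedged follow-up sketch, combining the log bundle the error box asks for with a container check over the profile's SSH access (hypothetical triage, not part of the test run):

    out/minikube-linux-amd64 -p old-k8s-version-720064 logs --file=logs.txt        # the bundle the MK_ADDON_ENABLE message requests
    out/minikube-linux-amd64 -p old-k8s-version-720064 ssh "sudo crictl ps -a"     # is kube-apiserver running or crash-looping inside the VM?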

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048296 -n no-preload-048296
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048296 -n no-preload-048296: exit status 3 (3.167696145s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:53:47.671926   83670 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.182:22: connect: no route to host
	E1209 23:53:47.671949   83670 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.182:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-048296 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1209 23:53:48.585231   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:53:48.681807   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-048296 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151329288s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.182:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-048296 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048296 -n no-preload-048296
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048296 -n no-preload-048296: exit status 3 (3.067990369s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:53:56.891885   83784 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.182:22: connect: no route to host
	E1209 23:53:56.891901   83784 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.182:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-048296" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-825613 -n embed-certs-825613
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-825613 -n embed-certs-825613: exit status 3 (3.16788495s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:53:49.975978   83700 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.19:22: connect: no route to host
	E1209 23:53:49.976005   83700 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.19:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-825613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-825613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151176583s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.19:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-825613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-825613 -n embed-certs-825613
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-825613 -n embed-certs-825613: exit status 3 (3.064828313s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:53:59.191943   83829 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.19:22: connect: no route to host
	E1209 23:53:59.191964   83829 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.19:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-825613" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871210 -n default-k8s-diff-port-871210
E1209 23:54:41.039357   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871210 -n default-k8s-diff-port-871210: exit status 3 (3.168160114s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:54:43.479935   84147 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host
	E1209 23:54:43.479957   84147 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-871210 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-871210 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152755055s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-871210 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871210 -n default-k8s-diff-port-871210
E1209 23:54:50.124736   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871210 -n default-k8s-diff-port-871210: exit status 3 (3.062970753s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 23:54:52.695979   84229 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host
	E1209 23:54:52.696008   84229 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.54:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-871210" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (743.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-720064 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1209 23:55:26.332939   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:31.693372   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:42.483182   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:44.497962   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:48.762510   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:52.174730   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:12.046401   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:16.467055   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:22.192812   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:33.103190   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:33.136614   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:57:00.807362   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:57:04.405014   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:57:55.058429   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:00.638964   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:12.522462   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:28.186747   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:28.339538   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:38.331544   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:55.888101   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:59:06.035018   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:59:20.545034   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:59:35.596671   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:59:48.246742   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:00:11.199243   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:00:26.333550   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:00:38.900488   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:00:48.761689   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:01:33.102929   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:03:00.638986   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:03:12.522419   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:03:28.185987   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-720064 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m19.994896566s)

                                                
                                                
-- stdout --
	* [old-k8s-version-720064] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-720064" primary control-plane node in "old-k8s-version-720064" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-720064" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:55:25.509412   84547 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:55:25.509516   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509524   84547 out.go:358] Setting ErrFile to fd 2...
	I1209 23:55:25.509541   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509720   84547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:55:25.510225   84547 out.go:352] Setting JSON to false
	I1209 23:55:25.511117   84547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9476,"bootTime":1733779049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:55:25.511206   84547 start.go:139] virtualization: kvm guest
	I1209 23:55:25.513214   84547 out.go:177] * [old-k8s-version-720064] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:55:25.514823   84547 notify.go:220] Checking for updates...
	I1209 23:55:25.514845   84547 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:55:25.516067   84547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:55:25.517350   84547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:55:25.518678   84547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:55:25.520082   84547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:55:25.521432   84547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:55:25.523092   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:55:25.523499   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.523548   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.538209   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I1209 23:55:25.538728   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.539263   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.539282   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.539570   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.539735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.541550   84547 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 23:55:25.542864   84547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:55:25.543159   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.543196   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.558672   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1209 23:55:25.559075   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.559524   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.559547   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.559881   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.560082   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.595916   84547 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 23:55:25.597365   84547 start.go:297] selected driver: kvm2
	I1209 23:55:25.597380   84547 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.597473   84547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:55:25.598182   84547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.598251   84547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:55:25.612993   84547 install.go:137] /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:55:25.613401   84547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:55:25.613430   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:55:25.613473   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:55:25.613509   84547 start.go:340] cluster config:
	{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.613608   84547 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.615371   84547 out.go:177] * Starting "old-k8s-version-720064" primary control-plane node in "old-k8s-version-720064" cluster
	I1209 23:55:25.616599   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:55:25.616629   84547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 23:55:25.616636   84547 cache.go:56] Caching tarball of preloaded images
	I1209 23:55:25.616716   84547 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:55:25.616727   84547 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1209 23:55:25.616809   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:55:25.616984   84547 start.go:360] acquireMachinesLock for old-k8s-version-720064: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:59:14.960350   84547 start.go:364] duration metric: took 3m49.343321897s to acquireMachinesLock for "old-k8s-version-720064"
	I1209 23:59:14.960428   84547 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:59:14.960440   84547 fix.go:54] fixHost starting: 
	I1209 23:59:14.960886   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:14.960950   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:14.981976   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I1209 23:59:14.982425   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:14.982933   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:59:14.982966   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:14.983341   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:14.983587   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:14.983772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetState
	I1209 23:59:14.985748   84547 fix.go:112] recreateIfNeeded on old-k8s-version-720064: state=Stopped err=<nil>
	I1209 23:59:14.985774   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	W1209 23:59:14.985968   84547 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:59:14.987652   84547 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-720064" ...
	I1209 23:59:14.988869   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .Start
	I1209 23:59:14.989086   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring networks are active...
	I1209 23:59:14.989817   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network default is active
	I1209 23:59:14.990188   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network mk-old-k8s-version-720064 is active
	I1209 23:59:14.990640   84547 main.go:141] libmachine: (old-k8s-version-720064) Getting domain xml...
	I1209 23:59:14.991392   84547 main.go:141] libmachine: (old-k8s-version-720064) Creating domain...
	I1209 23:59:16.350842   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting to get IP...
	I1209 23:59:16.351883   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.352326   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.352393   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.352304   85519 retry.go:31] will retry after 268.149849ms: waiting for machine to come up
	I1209 23:59:16.621742   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.622116   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.622145   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.622066   85519 retry.go:31] will retry after 365.051996ms: waiting for machine to come up
	I1209 23:59:16.988590   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.989124   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.989154   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.989089   85519 retry.go:31] will retry after 441.697933ms: waiting for machine to come up
	I1209 23:59:17.432962   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.433453   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.433482   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.433356   85519 retry.go:31] will retry after 503.173846ms: waiting for machine to come up
	I1209 23:59:17.938107   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.938576   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.938610   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.938533   85519 retry.go:31] will retry after 476.993358ms: waiting for machine to come up
	I1209 23:59:18.417462   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:18.418037   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:18.418064   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:18.417989   85519 retry.go:31] will retry after 732.449849ms: waiting for machine to come up
	I1209 23:59:19.152120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.152680   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.152708   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.152624   85519 retry.go:31] will retry after 764.17794ms: waiting for machine to come up
	I1209 23:59:19.918630   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.919113   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.919141   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.919066   85519 retry.go:31] will retry after 1.072352346s: waiting for machine to come up
	I1209 23:59:20.992747   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:20.993138   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:20.993196   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:20.993111   85519 retry.go:31] will retry after 1.479639772s: waiting for machine to come up
	I1209 23:59:22.474889   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:22.475414   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:22.475445   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:22.475364   85519 retry.go:31] will retry after 2.248463233s: waiting for machine to come up
	I1209 23:59:24.725307   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:24.725765   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:24.725796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:24.725706   85519 retry.go:31] will retry after 2.803512929s: waiting for machine to come up
	I1209 23:59:27.530567   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:27.531086   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:27.531120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:27.531022   85519 retry.go:31] will retry after 2.800227475s: waiting for machine to come up
	I1209 23:59:30.333201   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:30.333660   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:30.333684   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:30.333621   85519 retry.go:31] will retry after 3.041733113s: waiting for machine to come up
	I1209 23:59:33.378848   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379297   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has current primary IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379326   84547 main.go:141] libmachine: (old-k8s-version-720064) Found IP for machine: 192.168.39.188
	I1209 23:59:33.379351   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserving static IP address...
	I1209 23:59:33.379701   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.379722   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserved static IP address: 192.168.39.188
	I1209 23:59:33.379743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | skip adding static IP to network mk-old-k8s-version-720064 - found existing host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"}
	I1209 23:59:33.379759   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Getting to WaitForSSH function...
	I1209 23:59:33.379776   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting for SSH to be available...
	I1209 23:59:33.381697   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.381990   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.382027   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.382137   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH client type: external
	I1209 23:59:33.382163   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa (-rw-------)
	I1209 23:59:33.382193   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:33.382212   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | About to run SSH command:
	I1209 23:59:33.382229   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | exit 0
	I1209 23:59:33.507653   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | SSH cmd err, output: <nil>: 
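
Note: the "will retry after ..." lines and the `exit 0` probe above are how the machine is detected as SSH-reachable: a no-op command is run over the external ssh client until it exits 0, with a growing delay between attempts. Below is a minimal sketch of that pattern in Go, using the IP and key path printed in the log; the helper name, backoff values, and exact ssh options are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// waitForSSH keeps probing the guest with a no-op "exit 0" over the external
// ssh client until it succeeds or the timeout expires.
func waitForSSH(ip, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+ip, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH answered: the machine can now be provisioned
		}
		// Back off with a little jitter, like the "will retry after ..." lines.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return fmt.Errorf("ssh to %s not available within %s", ip, timeout)
}

func main() {
	err := waitForSSH("192.168.39.188",
		"/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa",
		2*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
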
	I1209 23:59:33.507957   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetConfigRaw
	I1209 23:59:33.508555   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.511020   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511399   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.511431   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511695   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:59:33.511932   84547 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:33.511958   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:33.512144   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.514383   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514722   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.514743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514876   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.515048   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515187   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515349   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.515596   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.515835   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.515850   84547 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:33.623370   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:33.623407   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623670   84547 buildroot.go:166] provisioning hostname "old-k8s-version-720064"
	I1209 23:59:33.623708   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623909   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.626370   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626749   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.626781   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626943   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.627172   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627348   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627490   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.627666   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.627868   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.627884   84547 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-720064 && echo "old-k8s-version-720064" | sudo tee /etc/hostname
	I1209 23:59:33.749338   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-720064
	
	I1209 23:59:33.749370   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.752401   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.752774   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.752807   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.753000   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.753180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753372   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753531   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.753674   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.753837   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.753853   84547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-720064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-720064/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-720064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:33.872512   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:59:33.872548   84547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:33.872575   84547 buildroot.go:174] setting up certificates
	I1209 23:59:33.872585   84547 provision.go:84] configureAuth start
	I1209 23:59:33.872597   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.872906   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.875719   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876012   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.876051   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876210   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.878539   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.878857   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.878894   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.879025   84547 provision.go:143] copyHostCerts
	I1209 23:59:33.879087   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:33.879100   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:33.879166   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:33.879254   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:33.879262   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:33.879283   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:33.879338   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:33.879346   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:33.879365   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:33.879414   84547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-720064 san=[127.0.0.1 192.168.39.188 localhost minikube old-k8s-version-720064]
	I1209 23:59:34.211417   84547 provision.go:177] copyRemoteCerts
	I1209 23:59:34.211475   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:34.211504   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.214285   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.214686   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214789   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.215006   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.215180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.215345   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.301399   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:59:34.326216   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:34.349136   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 23:59:34.371597   84547 provision.go:87] duration metric: took 498.999263ms to configureAuth
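
Note: the configureAuth step above copies the host CA material and generates a server certificate whose SANs are the addresses and hostnames in the san=[...] line, then ships it to /etc/docker on the guest. Below is a minimal standard-library sketch of generating such a SAN-bearing certificate signed by an existing CA; the file names, the assumed PKCS#1 RSA key format, and the generateServerCert helper are assumptions for illustration, not minikube's code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"errors"
	"math/big"
	"net"
	"os"
	"time"
)

// generateServerCert signs a fresh RSA key with the given CA and places every
// SAN into either IPAddresses or DNSNames, mirroring the san=[...] list above.
func generateServerCert(caCertPEM, caKeyPEM []byte, sans []string) (certPEM, keyPEM []byte, err error) {
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		return nil, nil, errors.New("CA cert or key is not valid PEM")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	if err != nil {
		return nil, nil, err
	}
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-720064"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})
	return certPEM, keyPEM, nil
}

func main() {
	caCert, _ := os.ReadFile("ca.pem")
	caKey, _ := os.ReadFile("ca-key.pem")
	cert, key, err := generateServerCert(caCert, caKey,
		[]string{"127.0.0.1", "192.168.39.188", "localhost", "minikube", "old-k8s-version-720064"})
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("server.pem", cert, 0644)
	_ = os.WriteFile("server-key.pem", key, 0600)
}
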
	I1209 23:59:34.371633   84547 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:34.371832   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:59:34.371902   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.374649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.374985   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.375031   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.375161   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.375368   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375541   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.375897   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.376128   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.376152   84547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:34.588357   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:34.588391   84547 machine.go:96] duration metric: took 1.076442399s to provisionDockerMachine
	I1209 23:59:34.588402   84547 start.go:293] postStartSetup for "old-k8s-version-720064" (driver="kvm2")
	I1209 23:59:34.588412   84547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:34.588443   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.588749   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:34.588772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.591413   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.591805   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.591836   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.592001   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.592176   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.592325   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.592431   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.673445   84547 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:34.677394   84547 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:34.677421   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:34.677498   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:34.677600   84547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:34.677716   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:34.686261   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:34.709412   84547 start.go:296] duration metric: took 120.993275ms for postStartSetup
	I1209 23:59:34.709467   84547 fix.go:56] duration metric: took 19.749026723s for fixHost
	I1209 23:59:34.709495   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.712458   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712762   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.712796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712930   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.713158   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713335   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713506   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.713690   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.713852   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.713862   84547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:34.823993   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788774.778513264
	
	I1209 23:59:34.824022   84547 fix.go:216] guest clock: 1733788774.778513264
	I1209 23:59:34.824034   84547 fix.go:229] Guest: 2024-12-09 23:59:34.778513264 +0000 UTC Remote: 2024-12-09 23:59:34.709473288 +0000 UTC m=+249.236536627 (delta=69.039976ms)
	I1209 23:59:34.824067   84547 fix.go:200] guest clock delta is within tolerance: 69.039976ms
	I1209 23:59:34.824075   84547 start.go:83] releasing machines lock for "old-k8s-version-720064", held for 19.863672525s
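
Note: the guest clock check above runs `date +%s.%N` on the machine and compares the result with the host clock, accepting the host when the delta is within tolerance. A small sketch of that comparison follows, reusing the ssh invocation style from the log; the tolerance value and helper name are illustrative assumptions rather than minikube's exact logic.

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta reads the guest's clock via `date +%s.%N` over ssh and
// returns guestTime minus hostTime.
func guestClockDelta(ip, keyPath string) (time.Duration, error) {
	host := time.Now()
	out, err := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@"+ip, "date +%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	const tolerance = 2 * time.Second // illustrative tolerance, not minikube's value
	delta, err := guestClockDelta("192.168.39.188",
		"/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa")
	if err != nil {
		fmt.Println("clock check failed:", err)
		return
	}
	abs := delta
	if abs < 0 {
		abs = -abs
	}
	if abs > tolerance {
		fmt.Printf("guest clock delta %s exceeds tolerance, guest clock would need syncing\n", delta)
		return
	}
	fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
}
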
	I1209 23:59:34.824110   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.824391   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:34.827165   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827596   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.827629   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827894   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828467   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828690   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828802   84547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:34.828865   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.828888   84547 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:34.828917   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.831978   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832052   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832514   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832548   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832572   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832748   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832751   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832899   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832982   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.832986   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.833198   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833217   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833370   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.833374   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.949044   84547 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:34.954999   84547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:35.100096   84547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:35.105764   84547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:35.105852   84547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:35.126893   84547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:35.126915   84547 start.go:495] detecting cgroup driver to use...
	I1209 23:59:35.126968   84547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:35.143236   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:35.157019   84547 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:35.157098   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:35.170608   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:35.184816   84547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:35.310043   84547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:35.486472   84547 docker.go:233] disabling docker service ...
	I1209 23:59:35.486653   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:35.501938   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:35.516997   84547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:35.667304   84547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:35.821316   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:35.836423   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:35.854633   84547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 23:59:35.854719   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.865592   84547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:35.865677   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.877247   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.889007   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.900196   84547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:35.912403   84547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:35.922919   84547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:35.922987   84547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:35.939439   84547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:35.949715   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:36.100115   84547 ssh_runner.go:195] Run: sudo systemctl restart crio
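
Note: the two sed edits above pin the pause image and switch cri-o to the cgroupfs cgroup manager before crio is restarted. Below is a sketch of the same replace-in-place edits done in Go instead of sed; it assumes the drop-in file layout those sed expressions expect and would be followed by a daemon-reload plus crio restart, as in the log.

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Pin the pause image, matching the first sed expression above.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	// Switch the cgroup manager to cgroupfs, matching the second sed expression.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0644); err != nil {
		panic(err)
	}
	// A real flow would then run: systemctl daemon-reload && systemctl restart crio
}
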
	I1209 23:59:36.207129   84547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:36.207206   84547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:36.213672   84547 start.go:563] Will wait 60s for crictl version
	I1209 23:59:36.213758   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:36.218204   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:36.255262   84547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:36.255367   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.286277   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.317334   84547 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 23:59:36.318651   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:36.321927   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322336   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:36.322366   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322619   84547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:36.327938   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:36.344152   84547 kubeadm.go:883] updating cluster {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:36.344279   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:59:36.344328   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:36.391261   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:36.391348   84547 ssh_runner.go:195] Run: which lz4
	I1209 23:59:36.395391   84547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:59:36.399585   84547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:59:36.399622   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 23:59:37.969251   84547 crio.go:462] duration metric: took 1.573891158s to copy over tarball
	I1209 23:59:37.969362   84547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:59:41.050494   84547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081073233s)
	I1209 23:59:41.050524   84547 crio.go:469] duration metric: took 3.081237568s to extract the tarball
	I1209 23:59:41.050533   84547 ssh_runner.go:146] rm: /preloaded.tar.lz4
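
Note: the preload step above copies a ~473 MB tarball to the guest, unpacks it into /var with tar and lz4, and times both phases. The sketch below covers the extract half; it assumes lz4 and GNU tar are present on the target, as the log implies, and is not minikube's ssh_runner code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Unpack the preloaded images into /var, preserving xattrs, mirroring the
	// tar invocation shown in the log.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	_ = exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run() // clean up, as the log does
}
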
	I1209 23:59:41.092694   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:41.125264   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:41.125289   84547 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.125378   84547 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.125388   84547 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.125436   84547 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 23:59:41.125466   84547 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.125497   84547 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.125515   84547 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127285   84547 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.127325   84547 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 23:59:41.127376   84547 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.127405   84547 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127421   84547 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.127300   84547 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.302277   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.314265   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 23:59:41.323683   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.326327   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.345042   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.345550   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.362324   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.367612   84547 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 23:59:41.367663   84547 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.367706   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.406644   84547 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 23:59:41.406691   84547 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 23:59:41.406783   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.450490   84547 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 23:59:41.450550   84547 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.450601   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.477332   84547 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 23:59:41.477389   84547 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.477438   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499714   84547 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 23:59:41.499757   84547 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.499783   84547 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 23:59:41.499806   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499818   84547 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.499861   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505128   84547 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 23:59:41.505179   84547 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.505205   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.505249   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.505212   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505306   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.505342   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.507860   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.507923   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.643108   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.644251   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.644273   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.644358   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.644400   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.650732   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.650817   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.777981   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.789388   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.789435   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.789491   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.789561   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.789592   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.802784   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.912370   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.914494   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 23:59:41.944460   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 23:59:41.944564   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 23:59:41.944569   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.944660   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 23:59:41.944662   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 23:59:41.944726   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 23:59:42.104003   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 23:59:42.104074   84547 cache_images.go:92] duration metric: took 978.7734ms to LoadCachedImages
	W1209 23:59:42.104176   84547 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
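
Note: the "needs transfer" lines above come from comparing the image ID reported by `podman image inspect` against the expected hash; on a mismatch the stale image is removed with crictl and a load from the local cache directory is attempted, which fails here because the cached files are missing. The sketch below shows that check-then-remove decision for a single image; the expected ID is copied from the kube-apiserver line above, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID asks podman (as the log does) for the local ID of an image.
func imageID(image string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	image := "registry.k8s.io/kube-apiserver:v1.20.0"
	// Expected ID taken from the "needs transfer" line for kube-apiserver above.
	want := "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	got, err := imageID(image)
	if err != nil || got != want {
		fmt.Printf("%q needs transfer: removing local copy, then loading from cache\n", image)
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
		// Loading from the cache directory would follow; in this run the cached
		// file is missing, which is exactly the warning printed above.
	}
}
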
	I1209 23:59:42.104198   84547 kubeadm.go:934] updating node { 192.168.39.188 8443 v1.20.0 crio true true} ...
	I1209 23:59:42.104326   84547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-720064 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:42.104431   84547 ssh_runner.go:195] Run: crio config
	I1209 23:59:42.154016   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:59:42.154041   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:42.154050   84547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:42.154066   84547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.188 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-720064 NodeName:old-k8s-version-720064 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 23:59:42.154226   84547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-720064"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.188
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.188"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:42.154292   84547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 23:59:42.164866   84547 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:42.164943   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:42.175137   84547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 23:59:42.192176   84547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:42.210695   84547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 23:59:42.230364   84547 ssh_runner.go:195] Run: grep 192.168.39.188	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:42.234239   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:42.246747   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:42.379643   84547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:42.396626   84547 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064 for IP: 192.168.39.188
	I1209 23:59:42.396713   84547 certs.go:194] generating shared ca certs ...
	I1209 23:59:42.396746   84547 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.396942   84547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:42.397007   84547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:42.397023   84547 certs.go:256] generating profile certs ...
	I1209 23:59:42.397191   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/client.key
	I1209 23:59:42.397270   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key.abcbe3fa
	I1209 23:59:42.397330   84547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key
	I1209 23:59:42.397516   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:42.397564   84547 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:42.397580   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:42.397623   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:42.397662   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:42.397701   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:42.397768   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:42.398726   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:42.450053   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:42.475685   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:42.514396   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:42.562383   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 23:59:42.589690   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:59:42.614149   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:42.647112   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:42.671117   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:42.694303   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:42.722563   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:42.753478   84547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:42.773673   84547 ssh_runner.go:195] Run: openssl version
	I1209 23:59:42.779930   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:42.791531   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796422   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796536   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.802386   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:42.817058   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:42.828570   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833292   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833357   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.839679   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:42.850729   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:42.861699   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866417   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866479   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.873602   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:42.884398   84547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:42.889137   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:42.896108   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:42.902584   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:42.908706   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:42.914313   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:42.919943   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
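The openssl x509 -checkend 86400 runs above check that each control-plane certificate is still valid 24 hours from now (openssl exits non-zero if it will have expired by then). A minimal Go stand-in using the standard crypto/x509 package, assuming a local PEM file path as its only argument, could look like this; it is illustrative, not minikube's implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// Rough equivalent of `openssl x509 -noout -in <cert.pem> -checkend 86400`:
// report whether the certificate expires within the next 24 hours.
func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <cert.pem>", os.Args[0])
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found in input")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(24 * time.Hour)
	if deadline.After(cert.NotAfter) {
		fmt.Printf("certificate %q expires within 24h (NotAfter=%s)\n", cert.Subject.CommonName, cert.NotAfter)
		os.Exit(1) // same convention as openssl: non-zero exit when it will expire
	}
	fmt.Printf("certificate %q valid for at least another 24h\n", cert.Subject.CommonName)
}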
	I1209 23:59:42.925417   84547 kubeadm.go:392] StartCluster: {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:42.925546   84547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:42.925602   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:42.962948   84547 cri.go:89] found id: ""
	I1209 23:59:42.963030   84547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:42.973746   84547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:42.973768   84547 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:42.973819   84547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:42.983593   84547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:42.984528   84547 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-720064" does not appear in /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:59:42.985106   84547 kubeconfig.go:62] /home/jenkins/minikube-integration/19888-18950/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-720064" cluster setting kubeconfig missing "old-k8s-version-720064" context setting]
	I1209 23:59:42.985981   84547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.996842   84547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:43.007858   84547 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.188
	I1209 23:59:43.007901   84547 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:43.007916   84547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:43.007982   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:43.046416   84547 cri.go:89] found id: ""
	I1209 23:59:43.046495   84547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:43.063226   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:43.073919   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:43.073946   84547 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:43.073991   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:59:43.084217   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:43.084292   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:43.094196   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:59:43.104098   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:43.104178   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:43.115056   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.124696   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:43.124761   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.135180   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:59:43.146325   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:43.146392   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:43.156017   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:43.166584   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:43.376941   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.198180   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.431002   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.549010   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.644736   84547 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:44.644827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:45.145713   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:45.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.145300   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.644880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.144955   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.645081   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.145132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.645278   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.144932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.645874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:50.145061   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:50.645171   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.144908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.645198   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.145700   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.145782   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.645678   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.145521   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:55.145578   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:55.645853   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.145844   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.644899   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.145431   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.645625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.145096   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.645933   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.145181   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.645624   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:00.145874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:00.644886   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.145063   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.645932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.145743   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.645396   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.145007   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.645927   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.145876   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:05.145906   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:05.644919   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.145142   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.645240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.144864   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.145246   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.645965   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.645731   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:10.145638   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:10.645462   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.145646   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.645921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.145804   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.644924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.145055   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.645811   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.144877   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.645709   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.145827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.645243   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.145027   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.645018   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.145001   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.644893   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.145189   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.645579   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.144946   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.645832   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:20.145634   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:20.645094   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.145179   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.645829   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.145159   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.645390   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.645205   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.145625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.645299   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:25.144971   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:25.644977   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.145967   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.144972   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.645030   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.145227   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.645104   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.144957   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.645516   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:30.145343   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:30.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.145393   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.645952   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.145695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.644910   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.146012   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.645636   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.145664   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.645813   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:35.145365   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:35.645823   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.145410   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.144994   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.645701   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.144880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.645735   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.145806   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.645695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:40.145692   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:40.644935   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.145320   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.645470   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.145714   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.644961   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.145687   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.644990   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:44.144958   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
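From here on the apiserver wait has been retrying sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500 ms; after about a minute with no match it falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status logs, as the lines below show. A minimal sketch of such a poll-with-deadline loop in Go, with hypothetical names and running pgrep locally rather than over SSH as api_server.go does, might look like:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `sudo pgrep -xnf <pattern>` every interval until it
// matches or the context deadline expires. Illustrative only; the real wait
// in minikube runs the command on the node via ssh_runner.
func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exits 0 when at least one process matched
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
		fmt.Println(err) // fall back to log gathering, as the log above does
		return
	}
	fmt.Println("kube-apiserver process is up")
}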
	I1210 00:00:44.645780   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:44.645870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:44.680200   84547 cri.go:89] found id: ""
	I1210 00:00:44.680321   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.680334   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:44.680343   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:44.680413   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:44.715278   84547 cri.go:89] found id: ""
	I1210 00:00:44.715305   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.715312   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:44.715318   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:44.715377   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:44.749908   84547 cri.go:89] found id: ""
	I1210 00:00:44.749933   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.749941   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:44.749946   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:44.750008   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:44.786851   84547 cri.go:89] found id: ""
	I1210 00:00:44.786882   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.786893   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:44.786901   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:44.786966   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:44.830060   84547 cri.go:89] found id: ""
	I1210 00:00:44.830106   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.830117   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:44.830125   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:44.830191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:44.865525   84547 cri.go:89] found id: ""
	I1210 00:00:44.865560   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.865571   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:44.865579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:44.865643   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:44.902529   84547 cri.go:89] found id: ""
	I1210 00:00:44.902565   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.902575   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:44.902584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:44.902647   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:44.938876   84547 cri.go:89] found id: ""
	I1210 00:00:44.938904   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.938914   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:44.938925   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:44.938939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:44.992533   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:44.992570   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:45.005990   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:45.006020   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:45.116774   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:45.116797   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:45.116810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:45.187376   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:45.187411   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:47.728964   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:47.742500   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:47.742560   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:47.775751   84547 cri.go:89] found id: ""
	I1210 00:00:47.775783   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.775793   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:47.775799   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:47.775848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:47.810202   84547 cri.go:89] found id: ""
	I1210 00:00:47.810228   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.810235   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:47.810241   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:47.810302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:47.844686   84547 cri.go:89] found id: ""
	I1210 00:00:47.844730   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.844739   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:47.844745   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:47.844802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:47.884076   84547 cri.go:89] found id: ""
	I1210 00:00:47.884108   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.884119   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:47.884127   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:47.884188   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:47.923697   84547 cri.go:89] found id: ""
	I1210 00:00:47.923722   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.923729   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:47.923734   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:47.923791   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:47.955790   84547 cri.go:89] found id: ""
	I1210 00:00:47.955816   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.955824   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:47.955829   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:47.955890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:48.021501   84547 cri.go:89] found id: ""
	I1210 00:00:48.021529   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.021537   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:48.021543   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:48.021592   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:48.054645   84547 cri.go:89] found id: ""
	I1210 00:00:48.054675   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.054688   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:48.054699   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:48.054714   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:48.135706   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:48.135729   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:48.135746   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:48.212397   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:48.212438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:48.254002   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:48.254033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:48.305858   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:48.305891   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:50.819644   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:50.834153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:50.834233   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:50.872357   84547 cri.go:89] found id: ""
	I1210 00:00:50.872391   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.872402   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:50.872409   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:50.872471   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:50.909793   84547 cri.go:89] found id: ""
	I1210 00:00:50.909826   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.909839   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:50.909848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:50.909915   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:50.947167   84547 cri.go:89] found id: ""
	I1210 00:00:50.947206   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.947219   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:50.947228   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:50.947304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:50.987194   84547 cri.go:89] found id: ""
	I1210 00:00:50.987219   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.987228   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:50.987234   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:50.987287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:51.027433   84547 cri.go:89] found id: ""
	I1210 00:00:51.027464   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.027476   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:51.027483   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:51.027586   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:51.062156   84547 cri.go:89] found id: ""
	I1210 00:00:51.062186   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.062200   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:51.062208   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:51.062267   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:51.099161   84547 cri.go:89] found id: ""
	I1210 00:00:51.099187   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.099195   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:51.099202   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:51.099249   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:51.134400   84547 cri.go:89] found id: ""
	I1210 00:00:51.134432   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.134445   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:51.134459   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:51.134474   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:51.171842   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:51.171869   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:51.222815   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:51.222854   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:51.236499   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:51.236537   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:51.307835   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:51.307856   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:51.307871   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:53.888771   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:53.902167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:53.902234   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:53.940724   84547 cri.go:89] found id: ""
	I1210 00:00:53.940748   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.940756   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:53.940767   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:53.940823   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:53.975076   84547 cri.go:89] found id: ""
	I1210 00:00:53.975111   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.975122   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:53.975128   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:53.975191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:54.011123   84547 cri.go:89] found id: ""
	I1210 00:00:54.011149   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.011157   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:54.011162   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:54.011207   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:54.043594   84547 cri.go:89] found id: ""
	I1210 00:00:54.043620   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.043628   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:54.043633   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:54.043679   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:54.076172   84547 cri.go:89] found id: ""
	I1210 00:00:54.076208   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.076219   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:54.076227   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:54.076292   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:54.110646   84547 cri.go:89] found id: ""
	I1210 00:00:54.110679   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.110691   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:54.110711   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:54.110784   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:54.145901   84547 cri.go:89] found id: ""
	I1210 00:00:54.145929   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.145940   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:54.145947   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:54.146007   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:54.178589   84547 cri.go:89] found id: ""
	I1210 00:00:54.178618   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.178629   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:54.178639   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:54.178652   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:54.231005   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:54.231040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:54.244583   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:54.244608   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:54.318759   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:54.318787   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:54.318800   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:54.395975   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:54.396012   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:56.969699   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:56.985159   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:56.985232   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:57.020768   84547 cri.go:89] found id: ""
	I1210 00:00:57.020798   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.020807   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:57.020812   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:57.020861   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:57.055796   84547 cri.go:89] found id: ""
	I1210 00:00:57.055826   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.055834   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:57.055839   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:57.055897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:57.089345   84547 cri.go:89] found id: ""
	I1210 00:00:57.089375   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.089385   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:57.089392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:57.089460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:57.123969   84547 cri.go:89] found id: ""
	I1210 00:00:57.123993   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.124004   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:57.124012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:57.124075   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:57.158744   84547 cri.go:89] found id: ""
	I1210 00:00:57.158769   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.158777   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:57.158783   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:57.158842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:57.203933   84547 cri.go:89] found id: ""
	I1210 00:00:57.203954   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.203962   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:57.203968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:57.204025   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:57.238180   84547 cri.go:89] found id: ""
	I1210 00:00:57.238213   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.238224   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:57.238231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:57.238287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:57.273583   84547 cri.go:89] found id: ""
	I1210 00:00:57.273612   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.273623   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:57.273633   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:57.273648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:57.345992   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:57.346019   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:57.346035   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:57.428335   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:57.428369   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:57.466437   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:57.466472   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:57.517064   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:57.517099   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.030599   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:00.045151   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:00.045229   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:00.083050   84547 cri.go:89] found id: ""
	I1210 00:01:00.083074   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.083085   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:00.083093   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:00.083152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:00.119082   84547 cri.go:89] found id: ""
	I1210 00:01:00.119107   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.119120   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:00.119126   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:00.119185   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:00.166888   84547 cri.go:89] found id: ""
	I1210 00:01:00.166921   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.166931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:00.166939   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:00.166998   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:00.209504   84547 cri.go:89] found id: ""
	I1210 00:01:00.209526   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.209533   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:00.209539   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:00.209595   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:00.247649   84547 cri.go:89] found id: ""
	I1210 00:01:00.247672   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.247680   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:00.247686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:00.247736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:00.294419   84547 cri.go:89] found id: ""
	I1210 00:01:00.294445   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.294455   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:00.294463   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:00.294526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:00.328634   84547 cri.go:89] found id: ""
	I1210 00:01:00.328667   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.328677   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:00.328684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:00.328751   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:00.364667   84547 cri.go:89] found id: ""
	I1210 00:01:00.364696   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.364724   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:00.364733   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:00.364745   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.377518   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:00.377552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:00.449145   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:00.449166   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:00.449178   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:00.529462   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:00.529499   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:00.570471   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:00.570503   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.122835   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:03.136329   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:03.136405   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:03.170352   84547 cri.go:89] found id: ""
	I1210 00:01:03.170380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.170388   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:03.170393   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:03.170454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:03.204339   84547 cri.go:89] found id: ""
	I1210 00:01:03.204368   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.204379   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:03.204386   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:03.204448   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:03.237516   84547 cri.go:89] found id: ""
	I1210 00:01:03.237559   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.237572   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:03.237579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:03.237641   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:03.273379   84547 cri.go:89] found id: ""
	I1210 00:01:03.273407   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.273416   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:03.273421   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:03.274017   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:03.308987   84547 cri.go:89] found id: ""
	I1210 00:01:03.309026   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.309038   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:03.309046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:03.309102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:03.341913   84547 cri.go:89] found id: ""
	I1210 00:01:03.341943   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.341954   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:03.341961   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:03.342009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:03.375403   84547 cri.go:89] found id: ""
	I1210 00:01:03.375429   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.375437   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:03.375442   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:03.375494   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:03.408352   84547 cri.go:89] found id: ""
	I1210 00:01:03.408380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.408387   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:03.408397   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:03.408409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:03.483789   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:03.483831   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:03.532021   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:03.532050   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.585095   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:03.585129   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:03.600062   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:03.600089   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:03.671633   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
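	The cycle above repeats roughly every three seconds: minikube first checks for a running kube-apiserver process with pgrep, then asks CRI-O through crictl for each expected control-plane container, and, finding none, falls back to gathering kubelet, dmesg, CRI-O, and container-status logs. Below is a minimal sketch of the same check run by hand; it assumes shell access to the minikube node (for example via `minikube ssh`) and reuses only commands that appear in the log itself.

	#!/usr/bin/env bash
	# Hedged sketch: reproduce the apiserver wait-check by hand on the node.
	# Assumes you are already on the minikube VM and that crictl is installed,
	# as the log above indicates.

	# 1. Is a kube-apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process found" \
	  || echo "no apiserver process"

	# 2. Does CRI-O know about any control-plane containers (running or exited)?
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  if [ -z "${ids}" ]; then
	    echo "no container matching ${name}"
	  else
	    echo "${name}: ${ids}"
	  fi
	done

	# 3. If nothing is found, the useful evidence lives in the kubelet and CRI-O journals.
	sudo journalctl -u kubelet -n 400 --no-pager | tail -n 40
	sudo journalctl -u crio -n 400 --no-pager | tail -n 40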
	I1210 00:01:06.172345   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:06.199809   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:06.199897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:06.240690   84547 cri.go:89] found id: ""
	I1210 00:01:06.240749   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.240760   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:06.240769   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:06.240822   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:06.275743   84547 cri.go:89] found id: ""
	I1210 00:01:06.275770   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.275779   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:06.275786   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:06.275848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:06.310399   84547 cri.go:89] found id: ""
	I1210 00:01:06.310427   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.310438   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:06.310445   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:06.310507   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:06.345278   84547 cri.go:89] found id: ""
	I1210 00:01:06.345308   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.345320   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:06.345328   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:06.345394   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:06.381926   84547 cri.go:89] found id: ""
	I1210 00:01:06.381961   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.381988   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:06.381994   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:06.382064   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:06.415341   84547 cri.go:89] found id: ""
	I1210 00:01:06.415367   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.415377   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:06.415385   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:06.415438   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:06.449701   84547 cri.go:89] found id: ""
	I1210 00:01:06.449733   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.449743   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:06.449750   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:06.449814   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:06.484271   84547 cri.go:89] found id: ""
	I1210 00:01:06.484298   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.484307   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:06.484317   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:06.484333   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:06.570279   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:06.570313   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:06.607812   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:06.607845   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:06.658206   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:06.658243   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:06.670949   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:06.670978   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:06.745785   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:09.246367   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:09.259390   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:09.259462   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:09.291811   84547 cri.go:89] found id: ""
	I1210 00:01:09.291841   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.291853   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:09.291865   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:09.291922   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:09.326996   84547 cri.go:89] found id: ""
	I1210 00:01:09.327027   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.327038   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:09.327045   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:09.327109   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:09.361026   84547 cri.go:89] found id: ""
	I1210 00:01:09.361062   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.361073   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:09.361081   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:09.361152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:09.397116   84547 cri.go:89] found id: ""
	I1210 00:01:09.397148   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.397159   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:09.397166   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:09.397225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:09.428989   84547 cri.go:89] found id: ""
	I1210 00:01:09.429025   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.429039   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:09.429046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:09.429111   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:09.462491   84547 cri.go:89] found id: ""
	I1210 00:01:09.462535   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.462547   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:09.462572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:09.462652   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:09.502705   84547 cri.go:89] found id: ""
	I1210 00:01:09.502728   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.502735   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:09.502740   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:09.502798   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:09.538721   84547 cri.go:89] found id: ""
	I1210 00:01:09.538744   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.538754   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:09.538766   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:09.538779   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:09.590732   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:09.590768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:09.604928   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:09.604960   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:09.681457   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:09.681484   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:09.681498   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:09.758116   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:09.758149   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:12.307016   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:12.320284   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:12.320371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:12.351984   84547 cri.go:89] found id: ""
	I1210 00:01:12.352007   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.352016   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:12.352024   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:12.352086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:12.387797   84547 cri.go:89] found id: ""
	I1210 00:01:12.387829   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.387840   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:12.387848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:12.387924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:12.417934   84547 cri.go:89] found id: ""
	I1210 00:01:12.417968   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.417979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:12.417987   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:12.418048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:12.451771   84547 cri.go:89] found id: ""
	I1210 00:01:12.451802   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.451815   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:12.451822   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:12.451890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:12.485007   84547 cri.go:89] found id: ""
	I1210 00:01:12.485037   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.485048   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:12.485055   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:12.485117   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:12.527851   84547 cri.go:89] found id: ""
	I1210 00:01:12.527879   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.527895   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:12.527905   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:12.527967   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:12.560341   84547 cri.go:89] found id: ""
	I1210 00:01:12.560364   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.560372   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:12.560377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:12.560428   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:12.593118   84547 cri.go:89] found id: ""
	I1210 00:01:12.593143   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.593150   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:12.593158   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:12.593169   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:12.658694   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:12.658717   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:12.658749   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:12.739742   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:12.739776   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:12.780949   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:12.780977   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:12.829570   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:12.829607   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:15.344239   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:15.358424   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:15.358527   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:15.390712   84547 cri.go:89] found id: ""
	I1210 00:01:15.390744   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.390757   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:15.390764   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:15.390847   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:15.424198   84547 cri.go:89] found id: ""
	I1210 00:01:15.424226   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.424234   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:15.424239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:15.424306   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:15.458340   84547 cri.go:89] found id: ""
	I1210 00:01:15.458363   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.458370   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:15.458377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:15.458422   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:15.491468   84547 cri.go:89] found id: ""
	I1210 00:01:15.491497   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.491507   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:15.491520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:15.491597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:15.528405   84547 cri.go:89] found id: ""
	I1210 00:01:15.528437   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.528448   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:15.528455   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:15.528517   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:15.562966   84547 cri.go:89] found id: ""
	I1210 00:01:15.562995   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.563005   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:15.563012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:15.563063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:15.595815   84547 cri.go:89] found id: ""
	I1210 00:01:15.595838   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.595845   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:15.595850   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:15.595907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:15.631296   84547 cri.go:89] found id: ""
	I1210 00:01:15.631322   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.631333   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:15.631347   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:15.631362   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:15.680177   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:15.680213   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:15.693685   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:15.693724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:15.760285   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:15.760312   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:15.760326   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:15.837814   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:15.837855   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:18.377112   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:18.390167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:18.390230   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:18.424095   84547 cri.go:89] found id: ""
	I1210 00:01:18.424127   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.424140   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:18.424150   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:18.424216   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:18.456840   84547 cri.go:89] found id: ""
	I1210 00:01:18.456868   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.456876   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:18.456882   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:18.456938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:18.492109   84547 cri.go:89] found id: ""
	I1210 00:01:18.492134   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.492145   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:18.492153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:18.492212   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:18.530445   84547 cri.go:89] found id: ""
	I1210 00:01:18.530472   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.530480   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:18.530486   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:18.530549   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:18.567036   84547 cri.go:89] found id: ""
	I1210 00:01:18.567060   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.567070   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:18.567077   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:18.567136   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:18.606829   84547 cri.go:89] found id: ""
	I1210 00:01:18.606853   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.606863   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:18.606870   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:18.606927   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:18.646032   84547 cri.go:89] found id: ""
	I1210 00:01:18.646061   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.646070   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:18.646075   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:18.646127   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:18.683969   84547 cri.go:89] found id: ""
	I1210 00:01:18.683997   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.684008   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:18.684019   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:18.684036   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:18.735760   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:18.735807   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:18.751689   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:18.751724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:18.823252   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:18.823272   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:18.823287   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:18.908071   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:18.908110   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:21.445415   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:21.458091   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:21.458152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:21.492615   84547 cri.go:89] found id: ""
	I1210 00:01:21.492646   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.492657   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:21.492664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:21.492718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:21.529564   84547 cri.go:89] found id: ""
	I1210 00:01:21.529586   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.529594   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:21.529599   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:21.529669   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:21.566409   84547 cri.go:89] found id: ""
	I1210 00:01:21.566441   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.566450   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:21.566456   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:21.566528   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:21.601783   84547 cri.go:89] found id: ""
	I1210 00:01:21.601815   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.601827   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:21.601835   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:21.601895   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:21.635740   84547 cri.go:89] found id: ""
	I1210 00:01:21.635762   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.635769   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:21.635775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:21.635831   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:21.667777   84547 cri.go:89] found id: ""
	I1210 00:01:21.667806   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.667826   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:21.667834   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:21.667894   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:21.704355   84547 cri.go:89] found id: ""
	I1210 00:01:21.704380   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.704388   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:21.704398   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:21.704457   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:21.737365   84547 cri.go:89] found id: ""
	I1210 00:01:21.737398   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.737410   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:21.737422   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:21.737438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:21.751394   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:21.751434   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:21.823217   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:21.823240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:21.823255   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:21.910097   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:21.910131   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:21.950087   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:21.950123   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.504626   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:24.517920   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:24.517997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:24.551766   84547 cri.go:89] found id: ""
	I1210 00:01:24.551806   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.551814   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:24.551821   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:24.551880   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:24.588210   84547 cri.go:89] found id: ""
	I1210 00:01:24.588246   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.588256   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:24.588263   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:24.588341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:24.620569   84547 cri.go:89] found id: ""
	I1210 00:01:24.620598   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.620607   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:24.620613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:24.620673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:24.657527   84547 cri.go:89] found id: ""
	I1210 00:01:24.657550   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.657558   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:24.657564   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:24.657636   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:24.689382   84547 cri.go:89] found id: ""
	I1210 00:01:24.689410   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.689418   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:24.689423   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:24.689475   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:24.727170   84547 cri.go:89] found id: ""
	I1210 00:01:24.727207   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.727224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:24.727230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:24.727280   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:24.760733   84547 cri.go:89] found id: ""
	I1210 00:01:24.760759   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.760769   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:24.760775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:24.760842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:24.794750   84547 cri.go:89] found id: ""
	I1210 00:01:24.794782   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.794791   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:24.794799   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:24.794810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.847403   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:24.847441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:24.862240   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:24.862273   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:24.936373   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:24.936396   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:24.936409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:25.011126   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:25.011160   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:27.551134   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:27.564526   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:27.564665   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:27.600652   84547 cri.go:89] found id: ""
	I1210 00:01:27.600677   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.600685   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:27.600690   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:27.600753   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:27.643593   84547 cri.go:89] found id: ""
	I1210 00:01:27.643625   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.643636   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:27.643649   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:27.643705   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:27.680633   84547 cri.go:89] found id: ""
	I1210 00:01:27.680664   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.680676   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:27.680684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:27.680748   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:27.721790   84547 cri.go:89] found id: ""
	I1210 00:01:27.721824   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.721832   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:27.721837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:27.721901   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:27.756462   84547 cri.go:89] found id: ""
	I1210 00:01:27.756492   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.756504   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:27.756512   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:27.756574   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:27.792692   84547 cri.go:89] found id: ""
	I1210 00:01:27.792728   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.792838   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:27.792851   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:27.792912   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:27.830065   84547 cri.go:89] found id: ""
	I1210 00:01:27.830098   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.830107   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:27.830116   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:27.830182   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:27.867516   84547 cri.go:89] found id: ""
	I1210 00:01:27.867557   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.867580   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:27.867595   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:27.867611   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:27.917622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:27.917660   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:27.931687   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:27.931717   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:28.003827   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:28.003860   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:28.003876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:28.081375   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:28.081410   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:30.622885   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:30.635990   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:30.636065   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:30.671418   84547 cri.go:89] found id: ""
	I1210 00:01:30.671453   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.671465   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:30.671475   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:30.671548   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:30.707168   84547 cri.go:89] found id: ""
	I1210 00:01:30.707203   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.707214   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:30.707222   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:30.707275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:30.741349   84547 cri.go:89] found id: ""
	I1210 00:01:30.741377   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.741386   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:30.741395   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:30.741449   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:30.774952   84547 cri.go:89] found id: ""
	I1210 00:01:30.774985   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.774997   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:30.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:30.775062   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:30.809596   84547 cri.go:89] found id: ""
	I1210 00:01:30.809623   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.809631   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:30.809636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:30.809699   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:30.844256   84547 cri.go:89] found id: ""
	I1210 00:01:30.844288   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.844300   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:30.844308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:30.844371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:30.876086   84547 cri.go:89] found id: ""
	I1210 00:01:30.876113   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.876124   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:30.876131   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:30.876195   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:30.910858   84547 cri.go:89] found id: ""
	I1210 00:01:30.910884   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.910895   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:30.910905   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:30.910920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:30.990855   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:30.990876   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:30.990888   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:31.069186   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:31.069221   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:31.109653   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:31.109689   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:31.164962   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:31.165001   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:33.679784   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:33.692222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:33.692310   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:33.727937   84547 cri.go:89] found id: ""
	I1210 00:01:33.727960   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.727967   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:33.727973   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:33.728022   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:33.762122   84547 cri.go:89] found id: ""
	I1210 00:01:33.762151   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.762162   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:33.762171   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:33.762237   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:33.793855   84547 cri.go:89] found id: ""
	I1210 00:01:33.793883   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.793894   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:33.793903   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:33.793961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:33.827240   84547 cri.go:89] found id: ""
	I1210 00:01:33.827280   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.827292   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:33.827300   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:33.827366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:33.863705   84547 cri.go:89] found id: ""
	I1210 00:01:33.863728   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.863738   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:33.863743   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:33.863792   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:33.897189   84547 cri.go:89] found id: ""
	I1210 00:01:33.897213   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.897224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:33.897229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:33.897282   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:33.930001   84547 cri.go:89] found id: ""
	I1210 00:01:33.930034   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.930044   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:33.930052   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:33.930113   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:33.967330   84547 cri.go:89] found id: ""
	I1210 00:01:33.967359   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.967367   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:33.967378   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:33.967390   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:34.051952   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:34.051996   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:34.087729   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:34.087762   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:34.137879   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:34.137915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:34.151989   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:34.152025   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:34.225587   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:36.726065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:36.740513   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:36.740594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:36.774959   84547 cri.go:89] found id: ""
	I1210 00:01:36.774991   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.775000   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:36.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:36.775070   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:36.808888   84547 cri.go:89] found id: ""
	I1210 00:01:36.808921   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.808934   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:36.808941   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:36.809001   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:36.841695   84547 cri.go:89] found id: ""
	I1210 00:01:36.841727   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.841738   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:36.841748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:36.841840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:36.877479   84547 cri.go:89] found id: ""
	I1210 00:01:36.877509   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.877522   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:36.877531   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:36.877600   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:36.909232   84547 cri.go:89] found id: ""
	I1210 00:01:36.909257   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.909265   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:36.909271   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:36.909328   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:36.945858   84547 cri.go:89] found id: ""
	I1210 00:01:36.945892   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.945904   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:36.945912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:36.945981   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:36.978559   84547 cri.go:89] found id: ""
	I1210 00:01:36.978592   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.978604   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:36.978611   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:36.978674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:37.015523   84547 cri.go:89] found id: ""
	I1210 00:01:37.015555   84547 logs.go:282] 0 containers: []
	W1210 00:01:37.015587   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:37.015598   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:37.015613   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:37.094825   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:37.094876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:37.134442   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:37.134470   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:37.184691   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:37.184728   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:37.198801   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:37.198828   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:37.268415   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:39.768790   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:39.783106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:39.783184   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:39.816629   84547 cri.go:89] found id: ""
	I1210 00:01:39.816655   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.816665   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:39.816672   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:39.816749   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:39.851518   84547 cri.go:89] found id: ""
	I1210 00:01:39.851550   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.851581   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:39.851590   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:39.851648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:39.887790   84547 cri.go:89] found id: ""
	I1210 00:01:39.887827   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.887842   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:39.887852   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:39.887924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:39.922230   84547 cri.go:89] found id: ""
	I1210 00:01:39.922255   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.922262   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:39.922268   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:39.922332   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:39.957142   84547 cri.go:89] found id: ""
	I1210 00:01:39.957174   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.957184   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:39.957192   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:39.957254   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:40.004583   84547 cri.go:89] found id: ""
	I1210 00:01:40.004609   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.004618   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:40.004624   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:40.004675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:40.038278   84547 cri.go:89] found id: ""
	I1210 00:01:40.038300   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.038308   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:40.038313   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:40.038366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:40.071920   84547 cri.go:89] found id: ""
	I1210 00:01:40.071947   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.071954   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:40.071963   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:40.071973   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:40.142005   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:40.142033   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:40.142049   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:40.219413   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:40.219452   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:40.260786   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:40.260822   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:40.315943   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:40.315988   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:42.832592   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:42.847532   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:42.847630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:42.879313   84547 cri.go:89] found id: ""
	I1210 00:01:42.879341   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.879348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:42.879354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:42.879403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:42.914839   84547 cri.go:89] found id: ""
	I1210 00:01:42.914866   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.914877   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:42.914884   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:42.914947   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:42.949514   84547 cri.go:89] found id: ""
	I1210 00:01:42.949553   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.949564   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:42.949572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:42.949632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:42.987974   84547 cri.go:89] found id: ""
	I1210 00:01:42.988004   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.988021   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:42.988029   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:42.988087   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:43.022835   84547 cri.go:89] found id: ""
	I1210 00:01:43.022860   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.022867   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:43.022873   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:43.022921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:43.055940   84547 cri.go:89] found id: ""
	I1210 00:01:43.055967   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.055975   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:43.055981   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:43.056030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:43.088728   84547 cri.go:89] found id: ""
	I1210 00:01:43.088754   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.088762   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:43.088769   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:43.088827   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:43.122809   84547 cri.go:89] found id: ""
	I1210 00:01:43.122842   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.122853   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:43.122865   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:43.122881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:43.172243   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:43.172277   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:43.186566   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:43.186596   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:43.264301   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:43.264327   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:43.264341   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:43.339804   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:43.339848   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:45.881356   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:45.894702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:45.894779   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:45.928927   84547 cri.go:89] found id: ""
	I1210 00:01:45.928951   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.928958   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:45.928964   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:45.929009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:45.965271   84547 cri.go:89] found id: ""
	I1210 00:01:45.965303   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.965315   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:45.965323   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:45.965392   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:45.999059   84547 cri.go:89] found id: ""
	I1210 00:01:45.999082   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.999090   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:45.999095   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:45.999140   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:46.034425   84547 cri.go:89] found id: ""
	I1210 00:01:46.034456   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.034468   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:46.034476   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:46.034529   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:46.067946   84547 cri.go:89] found id: ""
	I1210 00:01:46.067970   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.067986   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:46.067993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:46.068056   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:46.101671   84547 cri.go:89] found id: ""
	I1210 00:01:46.101699   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.101710   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:46.101718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:46.101783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:46.135860   84547 cri.go:89] found id: ""
	I1210 00:01:46.135885   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.135893   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:46.135898   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:46.135948   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:46.177398   84547 cri.go:89] found id: ""
	I1210 00:01:46.177439   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.177450   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:46.177461   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:46.177476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:46.248134   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:46.248160   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:46.248175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:46.323652   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:46.323688   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:46.364017   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:46.364044   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:46.429480   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:46.429524   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:48.956518   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:48.970209   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:48.970294   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:49.008933   84547 cri.go:89] found id: ""
	I1210 00:01:49.008966   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.008977   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:49.008986   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:49.009050   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:49.044802   84547 cri.go:89] found id: ""
	I1210 00:01:49.044833   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.044843   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:49.044850   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:49.044921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:49.077484   84547 cri.go:89] found id: ""
	I1210 00:01:49.077517   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.077525   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:49.077530   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:49.077576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:49.110144   84547 cri.go:89] found id: ""
	I1210 00:01:49.110174   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.110186   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:49.110193   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:49.110255   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:49.142591   84547 cri.go:89] found id: ""
	I1210 00:01:49.142622   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.142633   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:49.142646   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:49.142709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:49.175456   84547 cri.go:89] found id: ""
	I1210 00:01:49.175486   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.175497   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:49.175505   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:49.175603   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:49.208341   84547 cri.go:89] found id: ""
	I1210 00:01:49.208368   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.208379   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:49.208387   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:49.208445   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:49.241486   84547 cri.go:89] found id: ""
	I1210 00:01:49.241509   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.241518   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:49.241615   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:49.241647   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:49.280023   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:49.280051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:49.328822   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:49.328858   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:49.343076   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:49.343104   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:49.418010   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:49.418037   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:49.418051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:52.004053   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:52.017350   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:52.017418   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:52.055617   84547 cri.go:89] found id: ""
	I1210 00:01:52.055641   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.055648   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:52.055654   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:52.055712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:52.088601   84547 cri.go:89] found id: ""
	I1210 00:01:52.088629   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.088637   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:52.088642   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:52.088694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:52.121880   84547 cri.go:89] found id: ""
	I1210 00:01:52.121912   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.121922   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:52.121928   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:52.121986   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:52.161281   84547 cri.go:89] found id: ""
	I1210 00:01:52.161321   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.161334   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:52.161341   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:52.161406   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:52.222768   84547 cri.go:89] found id: ""
	I1210 00:01:52.222793   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.222800   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:52.222806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:52.222862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:52.271441   84547 cri.go:89] found id: ""
	I1210 00:01:52.271465   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.271473   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:52.271479   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:52.271526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:52.311128   84547 cri.go:89] found id: ""
	I1210 00:01:52.311152   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.311160   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:52.311165   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:52.311211   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:52.348862   84547 cri.go:89] found id: ""
	I1210 00:01:52.348885   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.348892   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:52.348900   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:52.348913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:52.401280   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:52.401324   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:52.415532   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:52.415580   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:52.484956   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:52.484979   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:52.484994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:52.565102   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:52.565137   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:55.106446   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:55.121756   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:55.121846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:55.158601   84547 cri.go:89] found id: ""
	I1210 00:01:55.158632   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.158643   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:55.158650   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:55.158712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:55.192424   84547 cri.go:89] found id: ""
	I1210 00:01:55.192454   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.192464   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:55.192471   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:55.192530   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:55.226178   84547 cri.go:89] found id: ""
	I1210 00:01:55.226204   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.226213   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:55.226222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:55.226285   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:55.264123   84547 cri.go:89] found id: ""
	I1210 00:01:55.264148   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.264161   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:55.264169   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:55.264226   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:55.302476   84547 cri.go:89] found id: ""
	I1210 00:01:55.302503   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.302512   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:55.302520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:55.302597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:55.342308   84547 cri.go:89] found id: ""
	I1210 00:01:55.342341   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.342352   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:55.342360   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:55.342419   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:55.382203   84547 cri.go:89] found id: ""
	I1210 00:01:55.382226   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.382232   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:55.382238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:55.382286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:55.421381   84547 cri.go:89] found id: ""
	I1210 00:01:55.421409   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.421421   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:55.421432   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:55.421449   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:55.473758   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:55.473793   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:55.488138   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:55.488166   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:55.567216   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:55.567240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:55.567251   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:55.648276   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:55.648319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:58.185245   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:58.199015   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:58.199091   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:58.232327   84547 cri.go:89] found id: ""
	I1210 00:01:58.232352   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.232360   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:58.232368   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:58.232436   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:58.270312   84547 cri.go:89] found id: ""
	I1210 00:01:58.270342   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.270353   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:58.270360   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:58.270420   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:58.303389   84547 cri.go:89] found id: ""
	I1210 00:01:58.303415   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.303422   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:58.303427   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:58.303486   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:58.338703   84547 cri.go:89] found id: ""
	I1210 00:01:58.338735   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.338747   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:58.338755   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:58.338817   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:58.376727   84547 cri.go:89] found id: ""
	I1210 00:01:58.376759   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.376770   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:58.376779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:58.376841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:58.410192   84547 cri.go:89] found id: ""
	I1210 00:01:58.410217   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.410224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:58.410230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:58.410286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:58.449772   84547 cri.go:89] found id: ""
	I1210 00:01:58.449794   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.449802   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:58.449807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:58.449859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:58.484285   84547 cri.go:89] found id: ""
	I1210 00:01:58.484316   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.484328   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:58.484339   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:58.484356   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:58.538402   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:58.538438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:58.551361   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:58.551391   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:58.613809   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:58.613836   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:58.613850   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:58.689606   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:58.689640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:01.230924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:01.244878   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:01.244990   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:01.280475   84547 cri.go:89] found id: ""
	I1210 00:02:01.280501   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.280509   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:01.280514   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:01.280561   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:01.323042   84547 cri.go:89] found id: ""
	I1210 00:02:01.323065   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.323071   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:01.323077   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:01.323124   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:01.356146   84547 cri.go:89] found id: ""
	I1210 00:02:01.356171   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.356181   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:01.356190   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:01.356247   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:01.392542   84547 cri.go:89] found id: ""
	I1210 00:02:01.392567   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.392577   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:01.392584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:01.392721   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:01.426232   84547 cri.go:89] found id: ""
	I1210 00:02:01.426257   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.426268   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:01.426275   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:01.426341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:01.459754   84547 cri.go:89] found id: ""
	I1210 00:02:01.459786   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.459798   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:01.459806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:01.459865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:01.492406   84547 cri.go:89] found id: ""
	I1210 00:02:01.492435   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.492445   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:01.492450   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:01.492499   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:01.532012   84547 cri.go:89] found id: ""
	I1210 00:02:01.532034   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.532042   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:01.532049   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:01.532060   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:01.583145   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:01.583181   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:01.596910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:01.596939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:01.670480   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:01.670506   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:01.670534   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:01.748001   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:01.748041   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:04.291065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:04.304507   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:04.304587   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:04.341673   84547 cri.go:89] found id: ""
	I1210 00:02:04.341700   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.341713   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:04.341720   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:04.341772   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:04.379743   84547 cri.go:89] found id: ""
	I1210 00:02:04.379776   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.379787   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:04.379795   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:04.379856   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:04.413532   84547 cri.go:89] found id: ""
	I1210 00:02:04.413562   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.413573   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:04.413588   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:04.413648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:04.449192   84547 cri.go:89] found id: ""
	I1210 00:02:04.449221   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.449231   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:04.449238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:04.449324   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:04.484638   84547 cri.go:89] found id: ""
	I1210 00:02:04.484666   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.484677   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:04.484686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:04.484745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:04.524854   84547 cri.go:89] found id: ""
	I1210 00:02:04.524889   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.524903   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:04.524912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:04.524976   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:04.564697   84547 cri.go:89] found id: ""
	I1210 00:02:04.564726   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.564737   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:04.564748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:04.564797   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:04.603506   84547 cri.go:89] found id: ""
	I1210 00:02:04.603534   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.603544   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:04.603554   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:04.603583   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:04.653025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:04.653062   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:04.666833   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:04.666878   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:04.745491   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:04.745513   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:04.745526   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:04.825267   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:04.825304   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:07.365419   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:07.378968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:07.379030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:07.412830   84547 cri.go:89] found id: ""
	I1210 00:02:07.412859   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.412868   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:07.412873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:07.412938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:07.445622   84547 cri.go:89] found id: ""
	I1210 00:02:07.445661   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.445674   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:07.445682   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:07.445742   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:07.480431   84547 cri.go:89] found id: ""
	I1210 00:02:07.480466   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.480474   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:07.480480   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:07.480533   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:07.518741   84547 cri.go:89] found id: ""
	I1210 00:02:07.518776   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.518790   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:07.518797   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:07.518860   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:07.552193   84547 cri.go:89] found id: ""
	I1210 00:02:07.552216   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.552223   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:07.552229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:07.552275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:07.585762   84547 cri.go:89] found id: ""
	I1210 00:02:07.585784   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.585792   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:07.585798   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:07.585843   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:07.618619   84547 cri.go:89] found id: ""
	I1210 00:02:07.618645   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.618653   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:07.618659   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:07.618709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:07.653377   84547 cri.go:89] found id: ""
	I1210 00:02:07.653418   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.653428   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:07.653440   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:07.653456   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:07.709366   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:07.709401   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:07.723762   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:07.723792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:07.804849   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:07.804869   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:07.804886   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:07.887117   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:07.887154   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:10.423120   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:10.436563   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:10.436628   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:10.470622   84547 cri.go:89] found id: ""
	I1210 00:02:10.470650   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.470658   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:10.470664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:10.470735   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:10.506211   84547 cri.go:89] found id: ""
	I1210 00:02:10.506238   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.506250   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:10.506257   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:10.506368   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:10.541846   84547 cri.go:89] found id: ""
	I1210 00:02:10.541871   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.541879   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:10.541885   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:10.541952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:10.581391   84547 cri.go:89] found id: ""
	I1210 00:02:10.581416   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.581427   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:10.581435   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:10.581503   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:10.615172   84547 cri.go:89] found id: ""
	I1210 00:02:10.615206   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.615216   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:10.615223   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:10.615289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:10.650791   84547 cri.go:89] found id: ""
	I1210 00:02:10.650813   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.650821   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:10.650826   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:10.650876   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:10.685428   84547 cri.go:89] found id: ""
	I1210 00:02:10.685452   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.685460   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:10.685466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:10.685524   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:10.719139   84547 cri.go:89] found id: ""
	I1210 00:02:10.719174   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.719186   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:10.719196   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:10.719211   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:10.732045   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:10.732073   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:10.805084   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:10.805111   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:10.805127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:10.888301   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:10.888337   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:10.926005   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:10.926033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:13.479317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:13.494021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:13.494089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:13.527753   84547 cri.go:89] found id: ""
	I1210 00:02:13.527787   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.527799   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:13.527806   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:13.527862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:13.560574   84547 cri.go:89] found id: ""
	I1210 00:02:13.560607   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.560618   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:13.560625   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:13.560688   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:13.595507   84547 cri.go:89] found id: ""
	I1210 00:02:13.595551   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.595584   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:13.595592   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:13.595657   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:13.629848   84547 cri.go:89] found id: ""
	I1210 00:02:13.629873   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.629884   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:13.629892   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:13.629952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:13.662407   84547 cri.go:89] found id: ""
	I1210 00:02:13.662436   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.662447   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:13.662454   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:13.662509   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:13.694892   84547 cri.go:89] found id: ""
	I1210 00:02:13.694921   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.694940   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:13.694949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:13.695013   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:13.733248   84547 cri.go:89] found id: ""
	I1210 00:02:13.733327   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.733349   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:13.733358   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:13.733426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:13.771853   84547 cri.go:89] found id: ""
	I1210 00:02:13.771884   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.771894   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:13.771906   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:13.771920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:13.846886   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:13.846913   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:13.846928   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:13.929722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:13.929758   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:13.968401   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:13.968427   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:14.019770   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:14.019811   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:16.532794   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:16.547084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:16.547172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:16.584046   84547 cri.go:89] found id: ""
	I1210 00:02:16.584073   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.584084   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:16.584091   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:16.584150   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:16.615981   84547 cri.go:89] found id: ""
	I1210 00:02:16.616012   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.616023   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:16.616030   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:16.616094   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:16.647939   84547 cri.go:89] found id: ""
	I1210 00:02:16.647967   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.647979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:16.647986   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:16.648048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:16.682585   84547 cri.go:89] found id: ""
	I1210 00:02:16.682620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.682632   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:16.682640   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:16.682695   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:16.718593   84547 cri.go:89] found id: ""
	I1210 00:02:16.718620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.718628   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:16.718634   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:16.718687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:16.752513   84547 cri.go:89] found id: ""
	I1210 00:02:16.752536   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.752543   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:16.752549   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:16.752598   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:16.787679   84547 cri.go:89] found id: ""
	I1210 00:02:16.787702   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.787710   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:16.787715   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:16.787777   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:16.821275   84547 cri.go:89] found id: ""
	I1210 00:02:16.821297   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.821305   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:16.821312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:16.821322   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:16.872500   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:16.872533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:16.885185   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:16.885212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:16.962658   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:16.962679   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:16.962694   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:17.039689   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:17.039726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:19.578060   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:19.590601   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:19.590675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:19.622145   84547 cri.go:89] found id: ""
	I1210 00:02:19.622170   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.622179   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:19.622184   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:19.622231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:19.653204   84547 cri.go:89] found id: ""
	I1210 00:02:19.653232   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.653243   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:19.653250   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:19.653317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:19.685104   84547 cri.go:89] found id: ""
	I1210 00:02:19.685137   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.685148   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:19.685156   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:19.685213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:19.719087   84547 cri.go:89] found id: ""
	I1210 00:02:19.719113   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.719121   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:19.719126   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:19.719176   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:19.753210   84547 cri.go:89] found id: ""
	I1210 00:02:19.753239   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.753250   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:19.753258   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:19.753317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:19.787608   84547 cri.go:89] found id: ""
	I1210 00:02:19.787635   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.787645   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:19.787653   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:19.787718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:19.823111   84547 cri.go:89] found id: ""
	I1210 00:02:19.823142   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.823154   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:19.823161   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:19.823221   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:19.858274   84547 cri.go:89] found id: ""
	I1210 00:02:19.858301   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.858312   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:19.858323   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:19.858344   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:19.905386   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:19.905420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:19.918995   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:19.919026   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:19.990676   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:19.990700   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:19.990716   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:20.064396   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:20.064435   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:22.604477   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:22.617408   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:22.617487   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:22.650144   84547 cri.go:89] found id: ""
	I1210 00:02:22.650178   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.650189   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:22.650197   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:22.650293   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:22.684342   84547 cri.go:89] found id: ""
	I1210 00:02:22.684367   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.684375   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:22.684380   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:22.684429   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:22.718168   84547 cri.go:89] found id: ""
	I1210 00:02:22.718194   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.718204   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:22.718211   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:22.718271   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:22.755192   84547 cri.go:89] found id: ""
	I1210 00:02:22.755222   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.755232   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:22.755240   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:22.755297   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:22.793095   84547 cri.go:89] found id: ""
	I1210 00:02:22.793129   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.793141   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:22.793149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:22.793209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:22.831116   84547 cri.go:89] found id: ""
	I1210 00:02:22.831146   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.831157   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:22.831164   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:22.831235   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:22.869652   84547 cri.go:89] found id: ""
	I1210 00:02:22.869686   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.869704   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:22.869709   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:22.869756   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:22.907480   84547 cri.go:89] found id: ""
	I1210 00:02:22.907504   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.907513   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:22.907520   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:22.907533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:22.983880   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:22.983902   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:22.983915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:23.062840   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:23.062880   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:23.101427   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:23.101476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:23.153861   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:23.153893   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:25.666908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:25.680047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:25.680125   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:25.715821   84547 cri.go:89] found id: ""
	I1210 00:02:25.715853   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.715865   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:25.715873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:25.715931   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:25.748191   84547 cri.go:89] found id: ""
	I1210 00:02:25.748222   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.748232   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:25.748239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:25.748295   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:25.786474   84547 cri.go:89] found id: ""
	I1210 00:02:25.786498   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.786505   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:25.786511   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:25.786569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:25.819282   84547 cri.go:89] found id: ""
	I1210 00:02:25.819319   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.819330   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:25.819337   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:25.819400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:25.858064   84547 cri.go:89] found id: ""
	I1210 00:02:25.858091   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.858100   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:25.858106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:25.858169   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:25.893335   84547 cri.go:89] found id: ""
	I1210 00:02:25.893362   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.893373   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:25.893380   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:25.893439   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:25.927225   84547 cri.go:89] found id: ""
	I1210 00:02:25.927254   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.927265   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:25.927272   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:25.927341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:25.963678   84547 cri.go:89] found id: ""
	I1210 00:02:25.963715   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.963725   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:25.963738   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:25.963756   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:25.994462   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:25.994488   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:26.061394   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:26.061442   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:26.061458   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:26.135152   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:26.135187   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:26.171961   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:26.171994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:28.723326   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:28.735721   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:28.735787   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:28.769493   84547 cri.go:89] found id: ""
	I1210 00:02:28.769516   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.769525   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:28.769530   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:28.769582   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:28.805594   84547 cri.go:89] found id: ""
	I1210 00:02:28.805641   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.805652   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:28.805658   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:28.805704   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:28.838982   84547 cri.go:89] found id: ""
	I1210 00:02:28.839007   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.839015   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:28.839020   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:28.839072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:28.876607   84547 cri.go:89] found id: ""
	I1210 00:02:28.876627   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.876635   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:28.876641   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:28.876700   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:28.910352   84547 cri.go:89] found id: ""
	I1210 00:02:28.910379   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.910386   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:28.910392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:28.910464   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:28.943896   84547 cri.go:89] found id: ""
	I1210 00:02:28.943917   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.943924   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:28.943930   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:28.943978   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:28.976554   84547 cri.go:89] found id: ""
	I1210 00:02:28.976583   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.976590   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:28.976596   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:28.976644   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:29.011334   84547 cri.go:89] found id: ""
	I1210 00:02:29.011357   84547 logs.go:282] 0 containers: []
	W1210 00:02:29.011364   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:29.011372   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:29.011384   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:29.061418   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:29.061464   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:29.074887   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:29.074913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:29.147240   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:29.147261   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:29.147272   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:29.225058   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:29.225094   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:31.763432   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:31.777257   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:31.777359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:31.812558   84547 cri.go:89] found id: ""
	I1210 00:02:31.812588   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.812598   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:31.812606   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:31.812668   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:31.847042   84547 cri.go:89] found id: ""
	I1210 00:02:31.847065   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.847082   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:31.847088   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:31.847135   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:31.881182   84547 cri.go:89] found id: ""
	I1210 00:02:31.881208   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.881216   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:31.881221   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:31.881272   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:31.917367   84547 cri.go:89] found id: ""
	I1210 00:02:31.917393   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.917401   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:31.917407   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:31.917454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:31.956836   84547 cri.go:89] found id: ""
	I1210 00:02:31.956868   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.956883   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:31.956893   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:31.956952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:31.993125   84547 cri.go:89] found id: ""
	I1210 00:02:31.993151   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.993160   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:31.993168   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:31.993225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:32.031648   84547 cri.go:89] found id: ""
	I1210 00:02:32.031679   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.031687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:32.031692   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:32.031746   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:32.065894   84547 cri.go:89] found id: ""
	I1210 00:02:32.065923   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.065930   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:32.065941   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:32.065957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:32.133473   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:32.133496   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:32.133508   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:32.213129   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:32.213161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:32.251424   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:32.251453   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:32.302284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:32.302323   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:34.815963   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:34.829460   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:34.829543   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:34.865811   84547 cri.go:89] found id: ""
	I1210 00:02:34.865837   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.865847   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:34.865854   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:34.865916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:34.899185   84547 cri.go:89] found id: ""
	I1210 00:02:34.899211   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.899220   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:34.899227   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:34.899289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:34.932472   84547 cri.go:89] found id: ""
	I1210 00:02:34.932500   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.932509   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:34.932517   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:34.932581   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:34.965817   84547 cri.go:89] found id: ""
	I1210 00:02:34.965846   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.965857   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:34.965866   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:34.965930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:35.000036   84547 cri.go:89] found id: ""
	I1210 00:02:35.000066   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.000077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:35.000084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:35.000139   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:35.033808   84547 cri.go:89] found id: ""
	I1210 00:02:35.033839   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.033850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:35.033857   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:35.033916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:35.066242   84547 cri.go:89] found id: ""
	I1210 00:02:35.066269   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.066278   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:35.066285   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:35.066349   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:35.100740   84547 cri.go:89] found id: ""
	I1210 00:02:35.100763   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.100771   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:35.100779   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:35.100792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:35.155483   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:35.155520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:35.168910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:35.168939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:35.243234   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:35.243252   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:35.243263   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:35.320622   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:35.320657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:37.855684   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:37.869056   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:37.869156   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:37.903720   84547 cri.go:89] found id: ""
	I1210 00:02:37.903748   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.903759   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:37.903766   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:37.903859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:37.937741   84547 cri.go:89] found id: ""
	I1210 00:02:37.937780   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.937791   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:37.937808   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:37.937869   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:37.975255   84547 cri.go:89] found id: ""
	I1210 00:02:37.975281   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.975292   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:37.975299   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:37.975359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:38.017348   84547 cri.go:89] found id: ""
	I1210 00:02:38.017381   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.017393   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:38.017400   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:38.017460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:38.051039   84547 cri.go:89] found id: ""
	I1210 00:02:38.051065   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.051073   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:38.051079   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:38.051129   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:38.085752   84547 cri.go:89] found id: ""
	I1210 00:02:38.085782   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.085791   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:38.085799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:38.085858   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:38.119975   84547 cri.go:89] found id: ""
	I1210 00:02:38.120004   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.120014   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:38.120021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:38.120086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:38.161467   84547 cri.go:89] found id: ""
	I1210 00:02:38.161499   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.161526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:38.161537   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:38.161551   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:38.222277   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:38.222314   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:38.239300   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:38.239332   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:38.308997   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:38.309016   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:38.309032   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:38.394064   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:38.394108   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:40.933406   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:40.945862   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:40.945937   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:40.979503   84547 cri.go:89] found id: ""
	I1210 00:02:40.979532   84547 logs.go:282] 0 containers: []
	W1210 00:02:40.979540   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:40.979545   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:40.979619   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:41.016758   84547 cri.go:89] found id: ""
	I1210 00:02:41.016792   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.016803   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:41.016811   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:41.016873   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:41.053562   84547 cri.go:89] found id: ""
	I1210 00:02:41.053593   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.053601   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:41.053607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:41.053667   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:41.086713   84547 cri.go:89] found id: ""
	I1210 00:02:41.086745   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.086757   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:41.086767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:41.086830   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:41.122901   84547 cri.go:89] found id: ""
	I1210 00:02:41.122935   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.122945   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:41.122952   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:41.123011   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:41.156306   84547 cri.go:89] found id: ""
	I1210 00:02:41.156337   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.156355   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:41.156362   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:41.156423   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:41.189840   84547 cri.go:89] found id: ""
	I1210 00:02:41.189871   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.189882   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:41.189890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:41.189946   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:41.223019   84547 cri.go:89] found id: ""
	I1210 00:02:41.223051   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.223061   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:41.223072   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:41.223088   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:41.275608   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:41.275640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:41.289181   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:41.289210   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:41.358375   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:41.358404   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:41.358420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:41.440214   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:41.440250   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:43.980600   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:43.993110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:43.993165   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:44.026688   84547 cri.go:89] found id: ""
	I1210 00:02:44.026721   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.026732   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:44.026741   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:44.026796   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:44.062914   84547 cri.go:89] found id: ""
	I1210 00:02:44.062936   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.062943   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:44.062948   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:44.062999   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:44.105974   84547 cri.go:89] found id: ""
	I1210 00:02:44.106001   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.106009   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:44.106014   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:44.106061   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:44.140239   84547 cri.go:89] found id: ""
	I1210 00:02:44.140265   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.140274   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:44.140280   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:44.140338   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:44.175754   84547 cri.go:89] found id: ""
	I1210 00:02:44.175785   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.175796   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:44.175803   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:44.175870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:44.211663   84547 cri.go:89] found id: ""
	I1210 00:02:44.211694   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.211705   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:44.211712   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:44.211776   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:44.244796   84547 cri.go:89] found id: ""
	I1210 00:02:44.244821   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.244831   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:44.244837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:44.244898   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:44.282491   84547 cri.go:89] found id: ""
	I1210 00:02:44.282515   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.282528   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:44.282549   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:44.282562   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:44.335284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:44.335328   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:44.349489   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:44.349530   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:44.418643   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:44.418668   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:44.418682   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:44.493901   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:44.493932   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:47.033132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:47.046322   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:47.046403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:47.078490   84547 cri.go:89] found id: ""
	I1210 00:02:47.078521   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.078533   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:47.078541   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:47.078602   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:47.111457   84547 cri.go:89] found id: ""
	I1210 00:02:47.111479   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.111487   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:47.111492   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:47.111538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:47.144643   84547 cri.go:89] found id: ""
	I1210 00:02:47.144678   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.144689   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:47.144696   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:47.144757   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:47.178106   84547 cri.go:89] found id: ""
	I1210 00:02:47.178131   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.178141   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:47.178148   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:47.178213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:47.215670   84547 cri.go:89] found id: ""
	I1210 00:02:47.215697   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.215712   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:47.215718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:47.215767   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:47.248916   84547 cri.go:89] found id: ""
	I1210 00:02:47.248941   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.248948   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:47.248953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:47.249002   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:47.287632   84547 cri.go:89] found id: ""
	I1210 00:02:47.287660   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.287671   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:47.287680   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:47.287745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:47.327064   84547 cri.go:89] found id: ""
	I1210 00:02:47.327094   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.327103   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:47.327112   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:47.327126   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:47.341132   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:47.341176   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:47.417100   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:47.417121   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:47.417134   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:47.502612   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:47.502648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:47.541312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:47.541339   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.095403   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:50.108145   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:50.108202   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:50.140424   84547 cri.go:89] found id: ""
	I1210 00:02:50.140451   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.140462   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:50.140472   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:50.140532   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:50.173836   84547 cri.go:89] found id: ""
	I1210 00:02:50.173859   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.173866   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:50.173872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:50.173928   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:50.213916   84547 cri.go:89] found id: ""
	I1210 00:02:50.213937   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.213944   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:50.213949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:50.213997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:50.246854   84547 cri.go:89] found id: ""
	I1210 00:02:50.246889   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.246899   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:50.246907   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:50.246956   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:50.281416   84547 cri.go:89] found id: ""
	I1210 00:02:50.281448   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.281456   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:50.281462   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:50.281511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:50.313263   84547 cri.go:89] found id: ""
	I1210 00:02:50.313296   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.313308   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:50.313318   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:50.313385   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:50.347421   84547 cri.go:89] found id: ""
	I1210 00:02:50.347453   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.347463   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:50.347470   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:50.347544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:50.383111   84547 cri.go:89] found id: ""
	I1210 00:02:50.383134   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.383142   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:50.383151   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:50.383162   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:50.421982   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:50.422013   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.475478   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:50.475520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:50.489202   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:50.489256   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:50.559501   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:50.559539   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:50.559552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.136042   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:53.149149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:53.149227   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:53.189323   84547 cri.go:89] found id: ""
	I1210 00:02:53.189349   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.189357   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:53.189365   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:53.189425   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:53.225235   84547 cri.go:89] found id: ""
	I1210 00:02:53.225269   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.225281   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:53.225288   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:53.225347   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:53.263465   84547 cri.go:89] found id: ""
	I1210 00:02:53.263492   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.263502   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:53.263510   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:53.263597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:53.297544   84547 cri.go:89] found id: ""
	I1210 00:02:53.297571   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.297583   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:53.297591   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:53.297656   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:53.331731   84547 cri.go:89] found id: ""
	I1210 00:02:53.331755   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.331762   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:53.331767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:53.331815   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:53.367395   84547 cri.go:89] found id: ""
	I1210 00:02:53.367427   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.367440   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:53.367447   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:53.367511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:53.403297   84547 cri.go:89] found id: ""
	I1210 00:02:53.403324   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.403332   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:53.403338   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:53.403398   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:53.437126   84547 cri.go:89] found id: ""
	I1210 00:02:53.437150   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.437158   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:53.437166   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:53.437177   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:53.489875   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:53.489914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:53.503915   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:53.503940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:53.578086   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:53.578114   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:53.578127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.658463   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:53.658501   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:56.197093   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:56.211959   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:56.212020   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:56.250462   84547 cri.go:89] found id: ""
	I1210 00:02:56.250484   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.250493   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:56.250498   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:56.250552   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:56.286828   84547 cri.go:89] found id: ""
	I1210 00:02:56.286854   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.286865   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:56.286872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:56.286939   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:56.320750   84547 cri.go:89] found id: ""
	I1210 00:02:56.320779   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.320787   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:56.320793   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:56.320840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:56.355903   84547 cri.go:89] found id: ""
	I1210 00:02:56.355943   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.355954   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:56.355960   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:56.356026   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:56.395979   84547 cri.go:89] found id: ""
	I1210 00:02:56.396007   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.396018   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:56.396025   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:56.396081   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:56.431481   84547 cri.go:89] found id: ""
	I1210 00:02:56.431506   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.431514   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:56.431520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:56.431594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:56.470318   84547 cri.go:89] found id: ""
	I1210 00:02:56.470349   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.470356   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:56.470361   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:56.470410   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:56.506388   84547 cri.go:89] found id: ""
	I1210 00:02:56.506411   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.506418   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:56.506429   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:56.506441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:56.558438   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:56.558483   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:56.571981   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:56.572004   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:56.634665   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:56.634697   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:56.634713   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:56.715663   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:56.715697   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:59.255305   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:59.268846   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:59.268930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:59.302334   84547 cri.go:89] found id: ""
	I1210 00:02:59.302359   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.302366   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:59.302372   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:59.302426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:59.336380   84547 cri.go:89] found id: ""
	I1210 00:02:59.336409   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.336419   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:59.336425   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:59.336492   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:59.370179   84547 cri.go:89] found id: ""
	I1210 00:02:59.370201   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.370210   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:59.370214   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:59.370268   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:59.403199   84547 cri.go:89] found id: ""
	I1210 00:02:59.403222   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.403229   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:59.403236   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:59.403307   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:59.438650   84547 cri.go:89] found id: ""
	I1210 00:02:59.438673   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.438681   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:59.438686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:59.438736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:59.473166   84547 cri.go:89] found id: ""
	I1210 00:02:59.473191   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.473199   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:59.473205   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:59.473264   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:59.506857   84547 cri.go:89] found id: ""
	I1210 00:02:59.506879   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.506888   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:59.506902   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:59.506963   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:59.551488   84547 cri.go:89] found id: ""
	I1210 00:02:59.551515   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.551526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:59.551542   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:59.551557   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:59.605032   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:59.605069   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:59.619238   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:59.619271   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:59.690772   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:59.690798   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:59.690813   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:59.774424   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:59.774460   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:02.315240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:02.329636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:02.329728   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:02.368571   84547 cri.go:89] found id: ""
	I1210 00:03:02.368599   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.368609   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:02.368621   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:02.368687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:02.406107   84547 cri.go:89] found id: ""
	I1210 00:03:02.406136   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.406148   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:02.406155   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:02.406219   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:02.443050   84547 cri.go:89] found id: ""
	I1210 00:03:02.443077   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.443091   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:02.443098   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:02.443146   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:02.484425   84547 cri.go:89] found id: ""
	I1210 00:03:02.484451   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.484461   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:02.484469   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:02.484536   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:02.525624   84547 cri.go:89] found id: ""
	I1210 00:03:02.525647   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.525655   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:02.525661   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:02.525711   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:02.564808   84547 cri.go:89] found id: ""
	I1210 00:03:02.564839   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.564850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:02.564856   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:02.564907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:02.598322   84547 cri.go:89] found id: ""
	I1210 00:03:02.598346   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.598354   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:02.598359   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:02.598417   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:02.632256   84547 cri.go:89] found id: ""
	I1210 00:03:02.632310   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.632322   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:02.632334   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:02.632348   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:02.686025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:02.686063   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:02.701214   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:02.701237   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:02.773453   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:02.773477   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:02.773491   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:02.862017   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:02.862068   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:05.401014   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:05.415009   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:05.415072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:05.454916   84547 cri.go:89] found id: ""
	I1210 00:03:05.454945   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.454955   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:05.454962   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:05.455023   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:05.492955   84547 cri.go:89] found id: ""
	I1210 00:03:05.492981   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.492988   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:05.492995   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:05.493046   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:05.539646   84547 cri.go:89] found id: ""
	I1210 00:03:05.539670   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.539683   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:05.539690   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:05.539755   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:05.574534   84547 cri.go:89] found id: ""
	I1210 00:03:05.574559   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.574567   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:05.574572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:05.574632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:05.608141   84547 cri.go:89] found id: ""
	I1210 00:03:05.608166   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.608174   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:05.608180   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:05.608243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:05.643725   84547 cri.go:89] found id: ""
	I1210 00:03:05.643751   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.643759   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:05.643765   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:05.643812   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:05.677730   84547 cri.go:89] found id: ""
	I1210 00:03:05.677760   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.677772   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:05.677779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:05.677846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:05.712475   84547 cri.go:89] found id: ""
	I1210 00:03:05.712507   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.712519   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:05.712531   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:05.712548   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:05.764532   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:05.764569   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:05.777910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:05.777938   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:05.851070   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:05.851099   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:05.851114   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:05.929518   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:05.929554   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.465166   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:08.478666   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:08.478723   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:08.513763   84547 cri.go:89] found id: ""
	I1210 00:03:08.513795   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.513808   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:08.513816   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:08.513877   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:08.548272   84547 cri.go:89] found id: ""
	I1210 00:03:08.548299   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.548306   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:08.548313   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:08.548397   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:08.581773   84547 cri.go:89] found id: ""
	I1210 00:03:08.581801   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.581812   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:08.581820   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:08.581884   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:08.613625   84547 cri.go:89] found id: ""
	I1210 00:03:08.613655   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.613664   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:08.613669   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:08.613716   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:08.644618   84547 cri.go:89] found id: ""
	I1210 00:03:08.644644   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.644652   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:08.644658   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:08.644717   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:08.676840   84547 cri.go:89] found id: ""
	I1210 00:03:08.676867   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.676879   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:08.676887   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:08.676952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:08.709918   84547 cri.go:89] found id: ""
	I1210 00:03:08.709952   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.709961   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:08.709969   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:08.710029   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:08.740842   84547 cri.go:89] found id: ""
	I1210 00:03:08.740871   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.740882   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:08.740893   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:08.740907   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:08.808679   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:08.808705   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:08.808721   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:08.884692   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:08.884733   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.920922   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:08.920952   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:08.971404   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:08.971441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:11.484905   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:11.498791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:11.498865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:11.535784   84547 cri.go:89] found id: ""
	I1210 00:03:11.535809   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.535820   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:11.535827   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:11.535890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:11.567981   84547 cri.go:89] found id: ""
	I1210 00:03:11.568006   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.568019   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:11.568026   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:11.568083   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:11.601188   84547 cri.go:89] found id: ""
	I1210 00:03:11.601213   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.601224   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:11.601231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:11.601289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:11.635759   84547 cri.go:89] found id: ""
	I1210 00:03:11.635787   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.635798   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:11.635807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:11.635867   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:11.675512   84547 cri.go:89] found id: ""
	I1210 00:03:11.675537   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.675547   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:11.675554   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:11.675638   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:11.709889   84547 cri.go:89] found id: ""
	I1210 00:03:11.709912   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.709922   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:11.709929   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:11.709987   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:11.744571   84547 cri.go:89] found id: ""
	I1210 00:03:11.744603   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.744610   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:11.744616   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:11.744677   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:11.779091   84547 cri.go:89] found id: ""
	I1210 00:03:11.779123   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.779136   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:11.779146   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:11.779156   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:11.830682   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:11.830726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:11.844352   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:11.844383   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:11.915025   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:11.915046   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:11.915058   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:11.998581   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:11.998620   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:14.536408   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:14.550250   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:14.550321   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:14.586005   84547 cri.go:89] found id: ""
	I1210 00:03:14.586037   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.586049   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:14.586057   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:14.586118   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:14.620124   84547 cri.go:89] found id: ""
	I1210 00:03:14.620159   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.620170   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:14.620178   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:14.620231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:14.655079   84547 cri.go:89] found id: ""
	I1210 00:03:14.655104   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.655112   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:14.655117   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:14.655178   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:14.688253   84547 cri.go:89] found id: ""
	I1210 00:03:14.688285   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.688298   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:14.688308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:14.688371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:14.726904   84547 cri.go:89] found id: ""
	I1210 00:03:14.726932   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.726940   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:14.726945   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:14.726994   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:14.764836   84547 cri.go:89] found id: ""
	I1210 00:03:14.764868   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.764881   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:14.764890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:14.764955   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:14.803557   84547 cri.go:89] found id: ""
	I1210 00:03:14.803605   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.803616   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:14.803621   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:14.803674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:14.841070   84547 cri.go:89] found id: ""
	I1210 00:03:14.841102   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.841122   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:14.841137   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:14.841161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:14.907607   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:14.907631   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:14.907644   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:14.985179   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:14.985209   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:15.022654   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:15.022687   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:15.075224   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:15.075260   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:17.589836   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:17.604045   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:17.604102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:17.643211   84547 cri.go:89] found id: ""
	I1210 00:03:17.643241   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.643251   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:17.643260   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:17.643320   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:17.675436   84547 cri.go:89] found id: ""
	I1210 00:03:17.675462   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.675472   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:17.675479   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:17.675538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:17.709428   84547 cri.go:89] found id: ""
	I1210 00:03:17.709465   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.709476   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:17.709484   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:17.709544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:17.764088   84547 cri.go:89] found id: ""
	I1210 00:03:17.764121   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.764132   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:17.764139   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:17.764200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:17.797908   84547 cri.go:89] found id: ""
	I1210 00:03:17.797933   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.797944   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:17.797953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:17.798016   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:17.829580   84547 cri.go:89] found id: ""
	I1210 00:03:17.829609   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.829620   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:17.829628   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:17.829687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:17.865106   84547 cri.go:89] found id: ""
	I1210 00:03:17.865136   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.865145   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:17.865150   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:17.865200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:17.896757   84547 cri.go:89] found id: ""
	I1210 00:03:17.896791   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.896803   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:17.896814   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:17.896830   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:17.977210   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:17.977244   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:18.016843   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:18.016867   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:18.065263   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:18.065294   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:18.078188   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:18.078212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:18.148164   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:20.648921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:20.661458   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:20.661516   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:20.693473   84547 cri.go:89] found id: ""
	I1210 00:03:20.693509   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.693518   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:20.693524   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:20.693576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:20.726279   84547 cri.go:89] found id: ""
	I1210 00:03:20.726301   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.726309   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:20.726314   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:20.726375   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:20.759895   84547 cri.go:89] found id: ""
	I1210 00:03:20.759922   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.759931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:20.759936   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:20.759988   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:20.793934   84547 cri.go:89] found id: ""
	I1210 00:03:20.793964   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.793974   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:20.793982   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:20.794049   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:20.828040   84547 cri.go:89] found id: ""
	I1210 00:03:20.828066   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.828077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:20.828084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:20.828143   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:20.861917   84547 cri.go:89] found id: ""
	I1210 00:03:20.861950   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.861960   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:20.861967   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:20.862028   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:20.896458   84547 cri.go:89] found id: ""
	I1210 00:03:20.896481   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.896489   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:20.896494   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:20.896551   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:20.931014   84547 cri.go:89] found id: ""
	I1210 00:03:20.931044   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.931052   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:20.931061   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:20.931072   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:20.968693   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:20.968718   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:21.021880   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:21.021917   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:21.035848   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:21.035881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:21.104570   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:21.104604   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:21.104621   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:23.679447   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:23.692326   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:23.692426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:23.729773   84547 cri.go:89] found id: ""
	I1210 00:03:23.729796   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.729804   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:23.729809   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:23.729855   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:23.765875   84547 cri.go:89] found id: ""
	I1210 00:03:23.765905   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.765915   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:23.765922   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:23.765984   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:23.800785   84547 cri.go:89] found id: ""
	I1210 00:03:23.800821   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.800831   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:23.800838   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:23.800902   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:23.838137   84547 cri.go:89] found id: ""
	I1210 00:03:23.838160   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.838168   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:23.838173   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:23.838222   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:23.869916   84547 cri.go:89] found id: ""
	I1210 00:03:23.869947   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.869958   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:23.869966   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:23.870027   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:23.903939   84547 cri.go:89] found id: ""
	I1210 00:03:23.903962   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.903969   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:23.903975   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:23.904021   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:23.937092   84547 cri.go:89] found id: ""
	I1210 00:03:23.937119   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.937127   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:23.937133   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:23.937194   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:23.970562   84547 cri.go:89] found id: ""
	I1210 00:03:23.970599   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.970611   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:23.970622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:23.970641   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:23.983364   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:23.983394   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:24.066624   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:24.066645   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:24.066657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:24.151466   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:24.151502   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:24.204590   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:24.204615   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:26.774169   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:26.786888   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:26.786964   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:26.820391   84547 cri.go:89] found id: ""
	I1210 00:03:26.820423   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.820433   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:26.820441   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:26.820504   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:26.853927   84547 cri.go:89] found id: ""
	I1210 00:03:26.853955   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.853963   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:26.853971   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:26.854031   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:26.887309   84547 cri.go:89] found id: ""
	I1210 00:03:26.887337   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.887347   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:26.887353   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:26.887415   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:26.923869   84547 cri.go:89] found id: ""
	I1210 00:03:26.923897   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.923908   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:26.923915   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:26.923983   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:26.959919   84547 cri.go:89] found id: ""
	I1210 00:03:26.959952   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.959964   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:26.959971   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:26.960032   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:26.993742   84547 cri.go:89] found id: ""
	I1210 00:03:26.993775   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.993787   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:26.993794   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:26.993853   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:27.026977   84547 cri.go:89] found id: ""
	I1210 00:03:27.027011   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.027021   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:27.027028   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:27.027090   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:27.064135   84547 cri.go:89] found id: ""
	I1210 00:03:27.064164   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.064173   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:27.064181   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:27.064192   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:27.101758   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:27.101784   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:27.155536   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:27.155592   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:27.168891   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:27.168914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:27.242486   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:27.242511   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:27.242525   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:29.821740   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:29.834536   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:29.834604   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:29.868316   84547 cri.go:89] found id: ""
	I1210 00:03:29.868341   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.868348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:29.868354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:29.868402   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:29.902290   84547 cri.go:89] found id: ""
	I1210 00:03:29.902320   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.902331   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:29.902338   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:29.902399   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:29.948750   84547 cri.go:89] found id: ""
	I1210 00:03:29.948780   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.948792   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:29.948800   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:29.948864   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:29.994587   84547 cri.go:89] found id: ""
	I1210 00:03:29.994618   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.994629   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:29.994636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:29.994694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:30.042216   84547 cri.go:89] found id: ""
	I1210 00:03:30.042243   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.042256   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:30.042264   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:30.042345   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:30.075964   84547 cri.go:89] found id: ""
	I1210 00:03:30.075989   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.075999   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:30.076007   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:30.076072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:30.108647   84547 cri.go:89] found id: ""
	I1210 00:03:30.108676   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.108687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:30.108695   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:30.108760   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:30.140983   84547 cri.go:89] found id: ""
	I1210 00:03:30.141013   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.141022   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:30.141030   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:30.141040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:30.198281   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:30.198319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:30.213820   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:30.213849   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:30.283907   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:30.283927   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:30.283940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:30.360731   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:30.360768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:32.904302   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:32.919123   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:32.919209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:32.952847   84547 cri.go:89] found id: ""
	I1210 00:03:32.952879   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.952889   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:32.952897   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:32.952961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:32.986993   84547 cri.go:89] found id: ""
	I1210 00:03:32.987020   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.987029   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:32.987035   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:32.987085   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:33.027506   84547 cri.go:89] found id: ""
	I1210 00:03:33.027536   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.027548   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:33.027556   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:33.027630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:33.061553   84547 cri.go:89] found id: ""
	I1210 00:03:33.061588   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.061605   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:33.061613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:33.061673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:33.097640   84547 cri.go:89] found id: ""
	I1210 00:03:33.097679   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.097693   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:33.097702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:33.097783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:33.131725   84547 cri.go:89] found id: ""
	I1210 00:03:33.131758   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.131768   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:33.131775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:33.131839   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:33.165731   84547 cri.go:89] found id: ""
	I1210 00:03:33.165759   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.165771   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:33.165779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:33.165841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:33.200288   84547 cri.go:89] found id: ""
	I1210 00:03:33.200310   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.200320   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:33.200338   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:33.200351   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:33.251524   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:33.251577   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:33.266197   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:33.266223   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:33.343529   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:33.343583   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:33.343600   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:33.420133   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:33.420175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:35.971411   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:35.988993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:35.989059   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:36.022639   84547 cri.go:89] found id: ""
	I1210 00:03:36.022673   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.022684   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:36.022692   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:36.022758   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:36.055421   84547 cri.go:89] found id: ""
	I1210 00:03:36.055452   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.055461   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:36.055466   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:36.055514   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:36.087690   84547 cri.go:89] found id: ""
	I1210 00:03:36.087721   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.087731   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:36.087738   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:36.087802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:36.125209   84547 cri.go:89] found id: ""
	I1210 00:03:36.125240   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.125249   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:36.125254   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:36.125304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:36.157544   84547 cri.go:89] found id: ""
	I1210 00:03:36.157586   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.157599   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:36.157607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:36.157676   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:36.193415   84547 cri.go:89] found id: ""
	I1210 00:03:36.193448   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.193459   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:36.193466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:36.193525   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:36.228941   84547 cri.go:89] found id: ""
	I1210 00:03:36.228971   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.228982   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:36.228989   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:36.229052   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:36.264821   84547 cri.go:89] found id: ""
	I1210 00:03:36.264851   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.264862   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:36.264873   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:36.264887   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:36.314841   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:36.314881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:36.328664   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:36.328695   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:36.402929   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:36.402956   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:36.402970   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:36.480270   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:36.480306   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:39.022748   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:39.036323   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:39.036382   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:39.070582   84547 cri.go:89] found id: ""
	I1210 00:03:39.070608   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.070616   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:39.070622   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:39.070690   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:39.108783   84547 cri.go:89] found id: ""
	I1210 00:03:39.108814   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.108825   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:39.108832   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:39.108889   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:39.142300   84547 cri.go:89] found id: ""
	I1210 00:03:39.142330   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.142338   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:39.142344   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:39.142400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:39.177859   84547 cri.go:89] found id: ""
	I1210 00:03:39.177891   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.177903   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:39.177911   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:39.177965   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:39.212109   84547 cri.go:89] found id: ""
	I1210 00:03:39.212140   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.212152   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:39.212160   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:39.212209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:39.245451   84547 cri.go:89] found id: ""
	I1210 00:03:39.245490   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.245501   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:39.245509   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:39.245569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:39.278925   84547 cri.go:89] found id: ""
	I1210 00:03:39.278957   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.278967   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:39.278974   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:39.279063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:39.312322   84547 cri.go:89] found id: ""
	I1210 00:03:39.312350   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.312358   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:39.312367   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:39.312377   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:39.362844   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:39.362882   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:39.375938   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:39.375967   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:39.443649   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:39.443676   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:39.443691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:39.523722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:39.523757   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:42.062317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:42.075094   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:42.075157   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:42.106719   84547 cri.go:89] found id: ""
	I1210 00:03:42.106744   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.106751   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:42.106757   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:42.106805   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:42.141977   84547 cri.go:89] found id: ""
	I1210 00:03:42.142011   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.142022   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:42.142029   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:42.142089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:42.175117   84547 cri.go:89] found id: ""
	I1210 00:03:42.175151   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.175164   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:42.175172   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:42.175243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:42.209871   84547 cri.go:89] found id: ""
	I1210 00:03:42.209900   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.209911   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:42.209919   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:42.209982   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:42.252784   84547 cri.go:89] found id: ""
	I1210 00:03:42.252812   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.252822   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:42.252830   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:42.252891   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:42.285934   84547 cri.go:89] found id: ""
	I1210 00:03:42.285961   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.285973   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:42.285980   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:42.286040   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:42.327393   84547 cri.go:89] found id: ""
	I1210 00:03:42.327423   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.327433   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:42.327440   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:42.327498   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:42.362244   84547 cri.go:89] found id: ""
	I1210 00:03:42.362279   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.362289   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:42.362302   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:42.362317   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:42.441656   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:42.441691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:42.479357   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:42.479397   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:42.533002   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:42.533038   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:42.546485   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:42.546514   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:42.616912   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:45.117156   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:45.131265   84547 kubeadm.go:597] duration metric: took 4m2.157458218s to restartPrimaryControlPlane
	W1210 00:03:45.131350   84547 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:03:45.131379   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:03:46.778025   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.646620044s)
	I1210 00:03:46.778099   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:03:46.792384   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:03:46.802897   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:03:46.813393   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:03:46.813417   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:03:46.813473   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:03:46.823016   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:03:46.823093   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:03:46.832912   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:03:46.841961   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:03:46.842019   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:03:46.851160   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.859879   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:03:46.859938   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.869192   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:03:46.878423   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:03:46.878487   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:03:46.888103   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:03:47.105146   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:05:43.088214   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:05:43.088352   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:05:43.089912   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:05:43.089988   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:05:43.090054   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:05:43.090140   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:05:43.090225   84547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:05:43.090302   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:05:43.092050   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:05:43.092141   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:05:43.092210   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:05:43.092305   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:05:43.092392   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:05:43.092493   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:05:43.092569   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:05:43.092680   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:05:43.092761   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:05:43.092865   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:05:43.092975   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:05:43.093045   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:05:43.093143   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:05:43.093188   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:05:43.093236   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:05:43.093317   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:05:43.093402   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:05:43.093561   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:05:43.093728   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:05:43.093785   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:05:43.093855   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:05:43.095285   84547 out.go:235]   - Booting up control plane ...
	I1210 00:05:43.095396   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:05:43.095469   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:05:43.095525   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:05:43.095630   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:05:43.095804   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:05:43.095873   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:05:43.095960   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096155   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096237   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096414   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096486   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096679   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096741   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096904   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096969   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.097122   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.097129   84547 kubeadm.go:310] 
	I1210 00:05:43.097167   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:05:43.097202   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:05:43.097211   84547 kubeadm.go:310] 
	I1210 00:05:43.097251   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:05:43.097280   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:05:43.097373   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:05:43.097379   84547 kubeadm.go:310] 
	I1210 00:05:43.097465   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:05:43.097495   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:05:43.097522   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:05:43.097531   84547 kubeadm.go:310] 
	I1210 00:05:43.097619   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:05:43.097688   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:05:43.097694   84547 kubeadm.go:310] 
	I1210 00:05:43.097794   84547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:05:43.097875   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:05:43.097941   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:05:43.098038   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:05:43.098124   84547 kubeadm.go:310] 
	W1210 00:05:43.098179   84547 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
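The checks recommended in the message above can be run directly on the node; a minimal sketch, using only the commands the message itself names (the cri-o socket path is the one shown there):

	# Is the kubelet running, and why did it stop?
	systemctl status kubelet
	journalctl -xeu kubelet
	# List any control-plane containers cri-o managed to start.
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of a failing container (CONTAINERID taken from the listing above).
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID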
	
	I1210 00:05:43.098224   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:05:48.532537   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.434287494s)
	I1210 00:05:48.532615   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:05:48.546394   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:05:48.555650   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:05:48.555669   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:05:48.555721   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:05:48.565301   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:05:48.565368   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:05:48.575330   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:05:48.584238   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:05:48.584324   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:05:48.593395   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.602150   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:05:48.602209   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.611177   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:05:48.619837   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:05:48.619907   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
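The four grep/rm pairs above are minikube's per-file staleness check on the kubeconfigs; a compact, hypothetical equivalent (endpoint and file names taken from the log, the loop itself is only a condensation, not minikube's code) would be:

	# Drop any kubeconfig that does not point at the expected control-plane endpoint.
	endpoint="https://control-plane.minikube.internal:8443"
	for name in admin kubelet controller-manager scheduler; do
	  cfg="/etc/kubernetes/${name}.conf"
	  # grep exits non-zero when the file is missing or lacks the endpoint, as in the log above.
	  sudo grep -q "$endpoint" "$cfg" || sudo rm -f "$cfg"
	done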
	I1210 00:05:48.629126   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:05:48.827680   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:07:44.857816   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:07:44.857930   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:07:44.859490   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:07:44.859542   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:07:44.859627   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:07:44.859758   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:07:44.859894   84547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:07:44.859953   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:07:44.861538   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:07:44.861657   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:07:44.861736   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:07:44.861809   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:07:44.861861   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:07:44.861920   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:07:44.861969   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:07:44.862024   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:07:44.862077   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:07:44.862143   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:07:44.862212   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:07:44.862246   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:07:44.862293   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:07:44.862343   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:07:44.862408   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:07:44.862505   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:07:44.862615   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:07:44.862765   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:07:44.862897   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:07:44.862955   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:07:44.863059   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:07:44.865485   84547 out.go:235]   - Booting up control plane ...
	I1210 00:07:44.865595   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:07:44.865720   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:07:44.865815   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:07:44.865929   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:07:44.866136   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:07:44.866209   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:07:44.866332   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866517   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866586   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866752   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866818   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866976   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867042   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867210   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867323   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867512   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867522   84547 kubeadm.go:310] 
	I1210 00:07:44.867611   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:07:44.867671   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:07:44.867691   84547 kubeadm.go:310] 
	I1210 00:07:44.867729   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:07:44.867766   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:07:44.867880   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:07:44.867895   84547 kubeadm.go:310] 
	I1210 00:07:44.868055   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:07:44.868105   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:07:44.868138   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:07:44.868146   84547 kubeadm.go:310] 
	I1210 00:07:44.868250   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:07:44.868354   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:07:44.868362   84547 kubeadm.go:310] 
	I1210 00:07:44.868464   84547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:07:44.868540   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:07:44.868606   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:07:44.868689   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:07:44.868741   84547 kubeadm.go:310] 
	I1210 00:07:44.868762   84547 kubeadm.go:394] duration metric: took 8m1.943355039s to StartCluster
	I1210 00:07:44.868799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:07:44.868852   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:07:44.906641   84547 cri.go:89] found id: ""
	I1210 00:07:44.906667   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.906675   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:07:44.906681   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:07:44.906734   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:07:44.942832   84547 cri.go:89] found id: ""
	I1210 00:07:44.942863   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.942872   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:07:44.942881   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:07:44.942945   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:07:44.978010   84547 cri.go:89] found id: ""
	I1210 00:07:44.978034   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.978042   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:07:44.978047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:07:44.978108   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:07:45.017066   84547 cri.go:89] found id: ""
	I1210 00:07:45.017089   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.017097   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:07:45.017110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:07:45.017172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:07:45.049757   84547 cri.go:89] found id: ""
	I1210 00:07:45.049778   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.049786   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:07:45.049791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:07:45.049842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:07:45.082754   84547 cri.go:89] found id: ""
	I1210 00:07:45.082789   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.082800   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:07:45.082807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:07:45.082933   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:07:45.117188   84547 cri.go:89] found id: ""
	I1210 00:07:45.117219   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.117227   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:07:45.117233   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:07:45.117302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:07:45.153747   84547 cri.go:89] found id: ""
	I1210 00:07:45.153776   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.153785   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:07:45.153795   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:07:45.153810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:07:45.207625   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:07:45.207662   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:07:45.220929   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:07:45.220957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:07:45.291850   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:07:45.291883   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:07:45.291899   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:07:45.397592   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:07:45.397629   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
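The same evidence minikube gathers here can be collected by hand; a minimal sketch using only the commands run above (binary and kubeconfig paths as they appear in the log):

	# Kubelet and CRI-O service logs, last 400 lines each.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# Kernel warnings and errors.
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# Node description via the bundled kubectl; this fails here because the apiserver never came up.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	# All CRI containers, regardless of state.
	sudo crictl ps -a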
	W1210 00:07:45.446755   84547 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 00:07:45.446816   84547 out.go:270] * 
	W1210 00:07:45.446898   84547 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.446921   84547 out.go:270] * 
	W1210 00:07:45.448145   84547 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
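The collection step the box refers to is a single command on the host; for a non-default profile the `-p <name>` flag would be added (the profile name is omitted here, as it is in the box):

	# Write the full minikube log bundle to logs.txt for attaching to a GitHub issue.
	minikube logs --file=logs.txt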
	I1210 00:07:45.452118   84547 out.go:201] 
	W1210 00:07:45.453335   84547 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.453398   84547 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 00:07:45.453438   84547 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 00:07:45.455704   84547 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-720064 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
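A minimal triage sketch, consolidating only the remediation steps that the kubeadm/minikube output above already suggests; the profile name old-k8s-version-720064, the cri-o socket path, and the start flags are taken from this run, and CONTAINERID is a placeholder for whichever control-plane container turns out to be failing:

	# inside the VM for this profile (e.g. via: out/minikube-linux-amd64 ssh -p old-k8s-version-720064)
	sudo systemctl enable kubelet.service                        # clears the [WARNING Service-Kubelet] pre-flight warning
	systemctl status kubelet                                     # is the kubelet running at all?
	journalctl -xeu kubelet                                      # why it is crashing / not becoming healthy
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# retry the same start with the cgroup-driver hint printed in the suggestion above
	out/minikube-linux-amd64 start -p old-k8s-version-720064 --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

This is only a sketch of the advice printed by kubeadm and minikube in this log, not part of the recorded test output; the post-mortem below captures what the harness actually collected.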
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064: exit status 2 (267.279527ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-720064 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-720064 logs -n 25: (1.579059077s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-030585 sudo cat                              | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo find                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo crio                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-030585                                       | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-866797 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | disable-driver-mounts-866797                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:52 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-048296             | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-825613            | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-871210  | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC | 09 Dec 24 23:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC |                     |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-720064        | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-048296                  | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-825613                 | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-871210       | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC | 10 Dec 24 00:04 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-720064             | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:55:25
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:55:25.509412   84547 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:55:25.509516   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509524   84547 out.go:358] Setting ErrFile to fd 2...
	I1209 23:55:25.509541   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509720   84547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:55:25.510225   84547 out.go:352] Setting JSON to false
	I1209 23:55:25.511117   84547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9476,"bootTime":1733779049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:55:25.511206   84547 start.go:139] virtualization: kvm guest
	I1209 23:55:25.513214   84547 out.go:177] * [old-k8s-version-720064] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:55:25.514823   84547 notify.go:220] Checking for updates...
	I1209 23:55:25.514845   84547 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:55:25.516067   84547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:55:25.517350   84547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:55:25.518678   84547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:55:25.520082   84547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:55:25.521432   84547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:55:25.523092   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:55:25.523499   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.523548   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.538209   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I1209 23:55:25.538728   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.539263   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.539282   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.539570   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.539735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.541550   84547 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 23:55:25.542864   84547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:55:25.543159   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.543196   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.558672   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1209 23:55:25.559075   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.559524   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.559547   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.559881   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.560082   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.595916   84547 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 23:55:25.597365   84547 start.go:297] selected driver: kvm2
	I1209 23:55:25.597380   84547 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.597473   84547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:55:25.598182   84547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.598251   84547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:55:25.612993   84547 install.go:137] /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:55:25.613401   84547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:55:25.613430   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:55:25.613473   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:55:25.613509   84547 start.go:340] cluster config:
	{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.613608   84547 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.615371   84547 out.go:177] * Starting "old-k8s-version-720064" primary control-plane node in "old-k8s-version-720064" cluster
	I1209 23:55:25.371778   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:25.616599   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:55:25.616629   84547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 23:55:25.616636   84547 cache.go:56] Caching tarball of preloaded images
	I1209 23:55:25.616716   84547 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:55:25.616727   84547 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1209 23:55:25.616809   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:55:25.616984   84547 start.go:360] acquireMachinesLock for old-k8s-version-720064: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:55:28.439810   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:34.519811   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:37.591794   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:43.671858   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:46.743891   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:52.823826   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:55.895854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:01.975845   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:05.047862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:11.127840   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:14.199834   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:20.279853   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:23.351850   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:29.431829   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:32.503859   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:38.583822   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:41.655831   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:47.735829   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:50.807862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:56.887827   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:59.959820   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:06.039852   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:09.111843   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:15.191823   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:18.263824   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:24.343852   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:27.415807   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:33.495843   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:36.567854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:42.647854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:45.719902   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:51.799842   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:54.871862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:00.951877   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:04.023859   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:10.103869   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:13.175788   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:19.255805   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:22.327784   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:28.407842   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:31.479864   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:34.483630   83900 start.go:364] duration metric: took 4m35.142298634s to acquireMachinesLock for "embed-certs-825613"
	I1209 23:58:34.483690   83900 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:58:34.483698   83900 fix.go:54] fixHost starting: 
	I1209 23:58:34.484038   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:58:34.484080   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:58:34.499267   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I1209 23:58:34.499717   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:58:34.500208   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:58:34.500236   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:58:34.500552   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:58:34.500747   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:34.500886   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:58:34.502318   83900 fix.go:112] recreateIfNeeded on embed-certs-825613: state=Stopped err=<nil>
	I1209 23:58:34.502340   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	W1209 23:58:34.502480   83900 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:58:34.504290   83900 out.go:177] * Restarting existing kvm2 VM for "embed-certs-825613" ...
	I1209 23:58:34.481505   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:58:34.481545   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:58:34.481889   83859 buildroot.go:166] provisioning hostname "no-preload-048296"
	I1209 23:58:34.481917   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:58:34.482120   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:58:34.483492   83859 machine.go:96] duration metric: took 4m37.418278223s to provisionDockerMachine
	I1209 23:58:34.483534   83859 fix.go:56] duration metric: took 4m37.439725581s for fixHost
	I1209 23:58:34.483542   83859 start.go:83] releasing machines lock for "no-preload-048296", held for 4m37.439753726s
	W1209 23:58:34.483579   83859 start.go:714] error starting host: provision: host is not running
	W1209 23:58:34.483682   83859 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1209 23:58:34.483694   83859 start.go:729] Will try again in 5 seconds ...
	I1209 23:58:34.505520   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Start
	I1209 23:58:34.505676   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring networks are active...
	I1209 23:58:34.506381   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring network default is active
	I1209 23:58:34.506676   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring network mk-embed-certs-825613 is active
	I1209 23:58:34.507046   83900 main.go:141] libmachine: (embed-certs-825613) Getting domain xml...
	I1209 23:58:34.507727   83900 main.go:141] libmachine: (embed-certs-825613) Creating domain...
	I1209 23:58:35.719784   83900 main.go:141] libmachine: (embed-certs-825613) Waiting to get IP...
	I1209 23:58:35.720797   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:35.721325   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:35.721377   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:35.721289   85186 retry.go:31] will retry after 251.225165ms: waiting for machine to come up
	I1209 23:58:35.973801   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:35.974247   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:35.974276   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:35.974205   85186 retry.go:31] will retry after 298.029679ms: waiting for machine to come up
	I1209 23:58:36.273658   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:36.274090   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:36.274119   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:36.274049   85186 retry.go:31] will retry after 467.217404ms: waiting for machine to come up
	I1209 23:58:36.742591   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:36.743027   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:36.743052   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:36.742982   85186 retry.go:31] will retry after 409.741731ms: waiting for machine to come up
	I1209 23:58:37.154543   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:37.154958   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:37.154982   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:37.154916   85186 retry.go:31] will retry after 723.02759ms: waiting for machine to come up
	I1209 23:58:37.878986   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:37.879474   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:37.879507   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:37.879423   85186 retry.go:31] will retry after 767.399861ms: waiting for machine to come up
	I1209 23:58:38.648416   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:38.648903   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:38.648927   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:38.648853   85186 retry.go:31] will retry after 1.06423499s: waiting for machine to come up
	I1209 23:58:39.485363   83859 start.go:360] acquireMachinesLock for no-preload-048296: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:58:39.714711   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:39.715114   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:39.715147   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:39.715058   85186 retry.go:31] will retry after 1.402884688s: waiting for machine to come up
	I1209 23:58:41.119828   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:41.120315   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:41.120359   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:41.120258   85186 retry.go:31] will retry after 1.339874475s: waiting for machine to come up
	I1209 23:58:42.461858   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:42.462314   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:42.462343   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:42.462261   85186 retry.go:31] will retry after 1.455809273s: waiting for machine to come up
	I1209 23:58:43.920097   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:43.920418   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:43.920448   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:43.920371   85186 retry.go:31] will retry after 2.872969061s: waiting for machine to come up
	I1209 23:58:46.796163   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:46.796477   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:46.796499   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:46.796439   85186 retry.go:31] will retry after 2.530677373s: waiting for machine to come up
	I1209 23:58:49.330146   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:49.330601   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:49.330631   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:49.330561   85186 retry.go:31] will retry after 4.492372507s: waiting for machine to come up
	I1209 23:58:53.827982   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.828461   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has current primary IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.828480   83900 main.go:141] libmachine: (embed-certs-825613) Found IP for machine: 192.168.50.19
	I1209 23:58:53.828492   83900 main.go:141] libmachine: (embed-certs-825613) Reserving static IP address...
	I1209 23:58:53.829001   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "embed-certs-825613", mac: "52:54:00:0f:2e:da", ip: "192.168.50.19"} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.829028   83900 main.go:141] libmachine: (embed-certs-825613) Reserved static IP address: 192.168.50.19
	I1209 23:58:53.829051   83900 main.go:141] libmachine: (embed-certs-825613) DBG | skip adding static IP to network mk-embed-certs-825613 - found existing host DHCP lease matching {name: "embed-certs-825613", mac: "52:54:00:0f:2e:da", ip: "192.168.50.19"}
	I1209 23:58:53.829067   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Getting to WaitForSSH function...
	I1209 23:58:53.829080   83900 main.go:141] libmachine: (embed-certs-825613) Waiting for SSH to be available...
	I1209 23:58:53.831079   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.831430   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.831462   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.831630   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Using SSH client type: external
	I1209 23:58:53.831682   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa (-rw-------)
	I1209 23:58:53.831723   83900 main.go:141] libmachine: (embed-certs-825613) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:58:53.831752   83900 main.go:141] libmachine: (embed-certs-825613) DBG | About to run SSH command:
	I1209 23:58:53.831765   83900 main.go:141] libmachine: (embed-certs-825613) DBG | exit 0
	I1209 23:58:53.959446   83900 main.go:141] libmachine: (embed-certs-825613) DBG | SSH cmd err, output: <nil>: 
	I1209 23:58:53.959864   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetConfigRaw
	I1209 23:58:53.960515   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:53.963227   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.963614   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.963644   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.963889   83900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/config.json ...
	I1209 23:58:53.964086   83900 machine.go:93] provisionDockerMachine start ...
	I1209 23:58:53.964103   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:53.964299   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:53.966516   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.966803   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.966834   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.966959   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:53.967149   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:53.967292   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:53.967428   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:53.967599   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:53.967824   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:53.967839   83900 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:58:54.079701   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:58:54.079732   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.080045   83900 buildroot.go:166] provisioning hostname "embed-certs-825613"
	I1209 23:58:54.080079   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.080333   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.082930   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.083283   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.083321   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.083420   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.083657   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.083834   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.083974   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.084095   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.084269   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.084282   83900 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-825613 && echo "embed-certs-825613" | sudo tee /etc/hostname
	I1209 23:58:54.208724   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825613
	
	I1209 23:58:54.208754   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.211739   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.212095   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.212122   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.212297   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.212513   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.212763   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.212936   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.213102   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.213285   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.213311   83900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-825613' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-825613/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-825613' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:58:55.043989   84259 start.go:364] duration metric: took 4m2.194304919s to acquireMachinesLock for "default-k8s-diff-port-871210"
	I1209 23:58:55.044059   84259 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:58:55.044067   84259 fix.go:54] fixHost starting: 
	I1209 23:58:55.044480   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:58:55.044546   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:58:55.060908   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I1209 23:58:55.061391   84259 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:58:55.061954   84259 main.go:141] libmachine: Using API Version  1
	I1209 23:58:55.061982   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:58:55.062346   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:58:55.062508   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:58:55.062674   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1209 23:58:55.064365   84259 fix.go:112] recreateIfNeeded on default-k8s-diff-port-871210: state=Stopped err=<nil>
	I1209 23:58:55.064386   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	W1209 23:58:55.064564   84259 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:58:55.066707   84259 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-871210" ...
	I1209 23:58:54.331911   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:58:54.331941   83900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:58:54.331966   83900 buildroot.go:174] setting up certificates
	I1209 23:58:54.331978   83900 provision.go:84] configureAuth start
	I1209 23:58:54.331991   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.332275   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:54.334597   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.334926   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.334957   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.335117   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.337393   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.337731   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.337763   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.337876   83900 provision.go:143] copyHostCerts
	I1209 23:58:54.337923   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:58:54.337936   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:58:54.338006   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:58:54.338093   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:58:54.338101   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:58:54.338127   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:58:54.338204   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:58:54.338212   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:58:54.338233   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:58:54.338286   83900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.embed-certs-825613 san=[127.0.0.1 192.168.50.19 embed-certs-825613 localhost minikube]
	I1209 23:58:54.423610   83900 provision.go:177] copyRemoteCerts
	I1209 23:58:54.423679   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:58:54.423706   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.426695   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.427009   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.427041   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.427144   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.427332   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.427516   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.427706   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:54.513991   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 23:58:54.536700   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:58:54.558541   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:58:54.580319   83900 provision.go:87] duration metric: took 248.326924ms to configureAuth
	I1209 23:58:54.580359   83900 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:58:54.580568   83900 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:58:54.580652   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.583347   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.583720   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.583748   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.583963   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.584171   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.584327   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.584481   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.584621   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.584775   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.584790   83900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:58:54.804814   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:58:54.804860   83900 machine.go:96] duration metric: took 840.759848ms to provisionDockerMachine
	I1209 23:58:54.804876   83900 start.go:293] postStartSetup for "embed-certs-825613" (driver="kvm2")
	I1209 23:58:54.804891   83900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:58:54.804916   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:54.805269   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:58:54.805297   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.807977   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.808340   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.808370   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.808505   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.808765   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.808945   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.809100   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:54.893741   83900 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:58:54.897936   83900 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:58:54.897961   83900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:58:54.898065   83900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:58:54.898145   83900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:58:54.898235   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:58:54.907056   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:58:54.929618   83900 start.go:296] duration metric: took 124.726532ms for postStartSetup
	I1209 23:58:54.929664   83900 fix.go:56] duration metric: took 20.445966428s for fixHost
	I1209 23:58:54.929711   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.932476   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.932866   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.932893   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.933108   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.933334   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.933523   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.933638   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.933747   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.933956   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.933967   83900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:58:55.043841   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788735.012193373
	
	I1209 23:58:55.043863   83900 fix.go:216] guest clock: 1733788735.012193373
	I1209 23:58:55.043870   83900 fix.go:229] Guest: 2024-12-09 23:58:55.012193373 +0000 UTC Remote: 2024-12-09 23:58:54.929689658 +0000 UTC m=+295.728685701 (delta=82.503715ms)
	I1209 23:58:55.043888   83900 fix.go:200] guest clock delta is within tolerance: 82.503715ms
	I1209 23:58:55.043892   83900 start.go:83] releasing machines lock for "embed-certs-825613", held for 20.560223511s
	I1209 23:58:55.043915   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.044198   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:55.046852   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.047246   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.047283   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.047446   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.047910   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.048101   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.048198   83900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:58:55.048235   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:55.048424   83900 ssh_runner.go:195] Run: cat /version.json
	I1209 23:58:55.048453   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:55.050905   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051274   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.051317   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051339   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051502   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:55.051721   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:55.051772   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.051794   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051925   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:55.051980   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:55.052091   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:55.052133   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:55.052263   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:55.052407   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:55.136384   83900 ssh_runner.go:195] Run: systemctl --version
	I1209 23:58:55.154927   83900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:58:55.297639   83900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:58:55.303733   83900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:58:55.303791   83900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:58:55.323858   83900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:58:55.323884   83900 start.go:495] detecting cgroup driver to use...
	I1209 23:58:55.323955   83900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:58:55.342390   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:58:55.360400   83900 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:58:55.360463   83900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:58:55.374005   83900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:58:55.387515   83900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:58:55.507866   83900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:58:55.676259   83900 docker.go:233] disabling docker service ...
	I1209 23:58:55.676338   83900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:58:55.695273   83900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:58:55.707909   83900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:58:55.824683   83900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:58:55.934080   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:58:55.950700   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:58:55.967756   83900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:58:55.967813   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.978028   83900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:58:55.978102   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.988960   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.999661   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.010253   83900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:58:56.020928   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.030944   83900 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.050251   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.062723   83900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:58:56.072435   83900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:58:56.072501   83900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:58:56.085332   83900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:58:56.095538   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:58:56.214133   83900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:58:56.310023   83900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:58:56.310107   83900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:58:56.314988   83900 start.go:563] Will wait 60s for crictl version
	I1209 23:58:56.315057   83900 ssh_runner.go:195] Run: which crictl
	I1209 23:58:56.318865   83900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:58:56.356889   83900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:58:56.356996   83900 ssh_runner.go:195] Run: crio --version
	I1209 23:58:56.387128   83900 ssh_runner.go:195] Run: crio --version
	I1209 23:58:56.417781   83900 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:58:55.068139   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Start
	I1209 23:58:55.068329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring networks are active...
	I1209 23:58:55.069026   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring network default is active
	I1209 23:58:55.069526   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring network mk-default-k8s-diff-port-871210 is active
	I1209 23:58:55.069970   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Getting domain xml...
	I1209 23:58:55.070725   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Creating domain...
	I1209 23:58:56.366161   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting to get IP...
	I1209 23:58:56.367215   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.367659   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.367720   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.367639   85338 retry.go:31] will retry after 212.733452ms: waiting for machine to come up
	I1209 23:58:56.582400   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.582945   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.582973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.582895   85338 retry.go:31] will retry after 380.03081ms: waiting for machine to come up
	I1209 23:58:56.964721   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.965265   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.965296   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.965208   85338 retry.go:31] will retry after 429.612511ms: waiting for machine to come up
	I1209 23:58:57.396713   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.397267   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.397298   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:57.397236   85338 retry.go:31] will retry after 595.581233ms: waiting for machine to come up
	I1209 23:58:56.418967   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:56.422317   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:56.422632   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:56.422656   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:56.422925   83900 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 23:58:56.426992   83900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:58:56.440436   83900 kubeadm.go:883] updating cluster {Name:embed-certs-825613 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:58:56.440548   83900 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:58:56.440592   83900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:58:56.475823   83900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:58:56.475907   83900 ssh_runner.go:195] Run: which lz4
	I1209 23:58:56.479747   83900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:58:56.483850   83900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:58:56.483900   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:58:57.820979   83900 crio.go:462] duration metric: took 1.341265764s to copy over tarball
	I1209 23:58:57.821091   83900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:58:57.994077   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.994566   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.994590   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:57.994526   85338 retry.go:31] will retry after 728.981009ms: waiting for machine to come up
	I1209 23:58:58.725707   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:58.726209   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:58.726252   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:58.726125   85338 retry.go:31] will retry after 701.836089ms: waiting for machine to come up
	I1209 23:58:59.429804   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:59.430322   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:59.430350   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:59.430266   85338 retry.go:31] will retry after 735.800538ms: waiting for machine to come up
	I1209 23:59:00.167774   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:00.168363   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:00.168397   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:00.168319   85338 retry.go:31] will retry after 1.052511845s: waiting for machine to come up
	I1209 23:59:01.222463   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:01.222960   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:01.222991   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:01.222919   85338 retry.go:31] will retry after 1.655880765s: waiting for machine to come up
	I1209 23:58:59.925304   83900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104178296s)
	I1209 23:58:59.925338   83900 crio.go:469] duration metric: took 2.104325305s to extract the tarball
	I1209 23:58:59.925348   83900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:58:59.962716   83900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:00.008762   83900 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:59:00.008793   83900 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:59:00.008803   83900 kubeadm.go:934] updating node { 192.168.50.19 8443 v1.31.2 crio true true} ...
	I1209 23:59:00.008929   83900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-825613 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:00.009008   83900 ssh_runner.go:195] Run: crio config
	I1209 23:59:00.050777   83900 cni.go:84] Creating CNI manager for ""
	I1209 23:59:00.050801   83900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:00.050813   83900 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:00.050842   83900 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.19 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-825613 NodeName:embed-certs-825613 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:59:00.051001   83900 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-825613"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.19"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.19"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:00.051083   83900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:59:00.062278   83900 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:00.062354   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:00.073157   83900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1209 23:59:00.088933   83900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:00.104358   83900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1209 23:59:00.122425   83900 ssh_runner.go:195] Run: grep 192.168.50.19	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:00.126224   83900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:00.138974   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:00.252489   83900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:00.269127   83900 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613 for IP: 192.168.50.19
	I1209 23:59:00.269155   83900 certs.go:194] generating shared ca certs ...
	I1209 23:59:00.269177   83900 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:00.269359   83900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:00.269406   83900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:00.269421   83900 certs.go:256] generating profile certs ...
	I1209 23:59:00.269551   83900 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/client.key
	I1209 23:59:00.269651   83900 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.key.12f830e7
	I1209 23:59:00.269724   83900 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.key
	I1209 23:59:00.269901   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:00.269954   83900 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:00.269968   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:00.270007   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:00.270056   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:00.270096   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:00.270161   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:00.271078   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:00.322392   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:00.351665   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:00.378615   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:00.401622   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 23:59:00.430280   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 23:59:00.453489   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:00.478368   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:00.501404   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:00.524273   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:00.546132   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:00.568553   83900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:00.584196   83900 ssh_runner.go:195] Run: openssl version
	I1209 23:59:00.589905   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:00.600107   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.604436   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.604504   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.609960   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:00.620129   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:00.629895   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.633993   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.634053   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.639538   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:00.649781   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:00.659593   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.663789   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.663832   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.669033   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:00.678881   83900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:00.683006   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:00.688631   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:00.694212   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:00.699930   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:00.705400   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:00.710668   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 23:59:00.715982   83900 kubeadm.go:392] StartCluster: {Name:embed-certs-825613 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:00.716059   83900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:00.716094   83900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:00.758093   83900 cri.go:89] found id: ""
	I1209 23:59:00.758164   83900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:00.767877   83900 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:00.767895   83900 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:00.767932   83900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:00.777172   83900 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:00.778578   83900 kubeconfig.go:125] found "embed-certs-825613" server: "https://192.168.50.19:8443"
	I1209 23:59:00.780691   83900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:00.790459   83900 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.19
	I1209 23:59:00.790492   83900 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:00.790507   83900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:00.790563   83900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:00.831048   83900 cri.go:89] found id: ""
	I1209 23:59:00.831129   83900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:00.847797   83900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:00.857819   83900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:00.857841   83900 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:00.857877   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:59:00.867108   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:00.867159   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:00.876742   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:59:00.885861   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:00.885942   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:00.895651   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:59:00.904027   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:00.904092   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:00.912956   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:59:00.921647   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:00.921704   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:00.930445   83900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:00.939496   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:01.049561   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.012953   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.234060   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.302650   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
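
	Because existing configuration files were found, the control plane is rebuilt by re-running individual "kubeadm init" phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init. A rough Go sketch of that loop, assuming kubeadm and the config file are already present on the machine (minikube itself runs these commands over SSH, so this is illustrative only):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // The same phases the log runs, in the same order.
	        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	        for _, phase := range phases {
	            args := append([]string{"init", "phase"}, strings.Fields(phase)...)
	            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	            out, err := exec.Command("kubeadm", args...).CombinedOutput()
	            fmt.Printf("kubeadm %s: err=%v\n%s\n", strings.Join(args, " "), err, out)
	        }
	    }
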
	I1209 23:59:02.378572   83900 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:02.378677   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:02.878755   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:03.379050   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:03.879215   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:02.880033   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:02.880532   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:02.880560   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:02.880488   85338 retry.go:31] will retry after 2.112793708s: waiting for machine to come up
	I1209 23:59:04.994713   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:04.995223   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:04.995254   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:04.995183   85338 retry.go:31] will retry after 2.420394988s: waiting for machine to come up
	I1209 23:59:07.416846   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:07.417309   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:07.417338   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:07.417281   85338 retry.go:31] will retry after 2.479420656s: waiting for machine to come up
	I1209 23:59:04.379735   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:04.406446   83900 api_server.go:72] duration metric: took 2.027872494s to wait for apiserver process to appear ...
	I1209 23:59:04.406480   83900 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:59:04.406506   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:04.407056   83900 api_server.go:269] stopped: https://192.168.50.19:8443/healthz: Get "https://192.168.50.19:8443/healthz": dial tcp 192.168.50.19:8443: connect: connection refused
	I1209 23:59:04.907381   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:06.967755   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:06.967791   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:06.967808   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.003261   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:07.003295   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:07.406692   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.411139   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:07.411168   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:07.906689   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.911136   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:07.911176   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:08.406697   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:08.411051   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 200:
	ok
	I1209 23:59:08.422472   83900 api_server.go:141] control plane version: v1.31.2
	I1209 23:59:08.422506   83900 api_server.go:131] duration metric: took 4.01601823s to wait for apiserver health ...
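
	The healthz wait above first returns 403 (anonymous requests are rejected until the RBAC bootstrap roles are reconciled), then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200. As a simple Go sketch of the same polling loop (illustrative only; the endpoint is the one from the log, TLS verification is skipped purely for the sketch, and the interval roughly matches the ~500ms cadence seen above):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	            Timeout:   2 * time.Second,
	        }
	        url := "https://192.168.50.19:8443/healthz"
	        for i := 0; i < 120; i++ {
	            resp, err := client.Get(url)
	            if err != nil {
	                fmt.Println("healthz:", err)
	            } else {
	                resp.Body.Close()
	                fmt.Println("healthz:", resp.StatusCode)
	                if resp.StatusCode == http.StatusOK {
	                    return
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        fmt.Println("timed out waiting for apiserver healthz")
	    }
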
	I1209 23:59:08.422532   83900 cni.go:84] Creating CNI manager for ""
	I1209 23:59:08.422541   83900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:08.424238   83900 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:59:08.425539   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:59:08.435424   83900 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 23:59:08.451320   83900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:59:08.461244   83900 system_pods.go:59] 8 kube-system pods found
	I1209 23:59:08.461285   83900 system_pods.go:61] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:59:08.461296   83900 system_pods.go:61] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:59:08.461305   83900 system_pods.go:61] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:59:08.461313   83900 system_pods.go:61] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:59:08.461322   83900 system_pods.go:61] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 23:59:08.461329   83900 system_pods.go:61] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:59:08.461346   83900 system_pods.go:61] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:59:08.461358   83900 system_pods.go:61] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 23:59:08.461368   83900 system_pods.go:74] duration metric: took 10.019045ms to wait for pod list to return data ...
	I1209 23:59:08.461381   83900 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:59:08.464945   83900 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:59:08.464973   83900 node_conditions.go:123] node cpu capacity is 2
	I1209 23:59:08.464986   83900 node_conditions.go:105] duration metric: took 3.600013ms to run NodePressure ...
	I1209 23:59:08.465011   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:08.762283   83900 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 23:59:08.766618   83900 kubeadm.go:739] kubelet initialised
	I1209 23:59:08.766640   83900 kubeadm.go:740] duration metric: took 4.332483ms waiting for restarted kubelet to initialise ...
	I1209 23:59:08.766648   83900 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:08.771039   83900 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.775795   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.775820   83900 pod_ready.go:82] duration metric: took 4.756744ms for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.775829   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.775836   83900 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.780473   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "etcd-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.780498   83900 pod_ready.go:82] duration metric: took 4.651756ms for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.780507   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "etcd-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.780517   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.785226   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.785252   83900 pod_ready.go:82] duration metric: took 4.725948ms for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.785261   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.785268   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.855086   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.855117   83900 pod_ready.go:82] duration metric: took 69.839948ms for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.855129   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.855141   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:09.255415   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-proxy-rn6fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.255451   83900 pod_ready.go:82] duration metric: took 400.29383ms for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:09.255461   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-proxy-rn6fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.255467   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:09.654750   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.654775   83900 pod_ready.go:82] duration metric: took 399.301549ms for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:09.654785   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.654792   83900 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:10.054952   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:10.054979   83900 pod_ready.go:82] duration metric: took 400.177675ms for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:10.054995   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:10.055002   83900 pod_ready.go:39] duration metric: took 1.288346997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:10.055019   83900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:59:10.066533   83900 ops.go:34] apiserver oom_adj: -16
	I1209 23:59:10.066559   83900 kubeadm.go:597] duration metric: took 9.298658158s to restartPrimaryControlPlane
	I1209 23:59:10.066570   83900 kubeadm.go:394] duration metric: took 9.350595042s to StartCluster
	I1209 23:59:10.066590   83900 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:10.066674   83900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:59:10.068469   83900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:10.068732   83900 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:59:10.068795   83900 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 23:59:10.068901   83900 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-825613"
	I1209 23:59:10.068925   83900 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-825613"
	W1209 23:59:10.068932   83900 addons.go:243] addon storage-provisioner should already be in state true
	I1209 23:59:10.068966   83900 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:10.068960   83900 addons.go:69] Setting default-storageclass=true in profile "embed-certs-825613"
	I1209 23:59:10.068969   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.069011   83900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-825613"
	I1209 23:59:10.068972   83900 addons.go:69] Setting metrics-server=true in profile "embed-certs-825613"
	I1209 23:59:10.069584   83900 addons.go:234] Setting addon metrics-server=true in "embed-certs-825613"
	W1209 23:59:10.069616   83900 addons.go:243] addon metrics-server should already be in state true
	I1209 23:59:10.069644   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.070176   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.070220   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.070219   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.070255   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.070947   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.071024   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.071039   83900 out.go:177] * Verifying Kubernetes components...
	I1209 23:59:10.073122   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:10.085793   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I1209 23:59:10.086012   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I1209 23:59:10.086282   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.086412   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.086794   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.086823   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.086907   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.086930   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.087156   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38521
	I1209 23:59:10.087160   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.087251   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.087469   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.087617   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.087792   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.087828   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.088155   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.088177   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.088692   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.089323   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.089358   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.090339   83900 addons.go:234] Setting addon default-storageclass=true in "embed-certs-825613"
	W1209 23:59:10.090357   83900 addons.go:243] addon default-storageclass should already be in state true
	I1209 23:59:10.090379   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.090609   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.090639   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.103251   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I1209 23:59:10.103791   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104305   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.104325   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.104376   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I1209 23:59:10.104560   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I1209 23:59:10.104713   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.104736   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104848   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104854   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.105269   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.105285   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.105393   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.105414   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.105595   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.105751   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.105784   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.106203   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.106235   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.107268   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.107710   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.109781   83900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:10.109863   83900 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 23:59:10.111198   83900 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:59:10.111210   83900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:59:10.111224   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.111294   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:59:10.111300   83900 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:59:10.111309   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.114610   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.114929   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.114962   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.114980   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.115159   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.115318   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.115438   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.115474   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.115516   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.115599   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.115704   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.115844   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.115944   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.116149   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.123450   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1209 23:59:10.123926   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.124406   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.124445   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.124744   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.124936   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.126437   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.126619   83900 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:59:10.126637   83900 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:59:10.126656   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.129218   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.129663   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.129695   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.129874   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.130055   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.130164   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.130296   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.272711   83900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:10.290121   83900 node_ready.go:35] waiting up to 6m0s for node "embed-certs-825613" to be "Ready" ...
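
	Alongside applying the addon manifests, node_ready.go waits up to 6m0s for the node's Ready condition. A hedged sketch of an equivalent check driven through kubectl from Go (the context and node name are taken from the log; the 2-second polling interval is an assumption, not minikube's value):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	        "time"
	    )

	    func main() {
	        deadline := time.Now().Add(6 * time.Minute)
	        for time.Now().Before(deadline) {
	            out, err := exec.Command("kubectl", "--context", "embed-certs-825613",
	                "get", "node", "embed-certs-825613",
	                "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	            if err == nil && strings.TrimSpace(string(out)) == "True" {
	                fmt.Println("node is Ready")
	                return
	            }
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for node to become Ready")
	    }
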
	I1209 23:59:10.379383   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:59:10.397672   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:59:10.403543   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:59:10.403591   83900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 23:59:10.439890   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:59:10.439915   83900 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:59:10.482300   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:59:10.482331   83900 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:59:10.550923   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:59:11.613572   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.234149831s)
	I1209 23:59:11.613622   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.613634   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.613655   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.21594498s)
	I1209 23:59:11.613701   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.613713   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.613929   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.613976   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.613992   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.613979   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614006   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.614018   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614032   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.614052   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.614000   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.614085   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.614332   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614343   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.614343   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614361   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614362   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614372   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.621731   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.621749   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.622001   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.622017   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664217   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113253318s)
	I1209 23:59:11.664273   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.664288   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.664633   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.664637   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.664649   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664658   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.664676   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.664875   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.664888   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664903   83900 addons.go:475] Verifying addon metrics-server=true in "embed-certs-825613"
	I1209 23:59:11.666814   83900 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1209 23:59:09.899886   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:09.900284   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:09.900314   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:09.900262   85338 retry.go:31] will retry after 3.641244983s: waiting for machine to come up
	I1209 23:59:11.668002   83900 addons.go:510] duration metric: took 1.599215886s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1209 23:59:12.293475   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:14.960350   84547 start.go:364] duration metric: took 3m49.343321897s to acquireMachinesLock for "old-k8s-version-720064"
	I1209 23:59:14.960428   84547 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:59:14.960440   84547 fix.go:54] fixHost starting: 
	I1209 23:59:14.960886   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:14.960950   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:14.981976   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I1209 23:59:14.982425   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:14.982933   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:59:14.982966   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:14.983341   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:14.983587   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:14.983772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetState
	I1209 23:59:14.985748   84547 fix.go:112] recreateIfNeeded on old-k8s-version-720064: state=Stopped err=<nil>
	I1209 23:59:14.985774   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	W1209 23:59:14.985968   84547 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:59:14.987652   84547 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-720064" ...
	I1209 23:59:14.988869   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .Start
	I1209 23:59:14.989086   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring networks are active...
	I1209 23:59:14.989817   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network default is active
	I1209 23:59:14.990188   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network mk-old-k8s-version-720064 is active
	I1209 23:59:14.990640   84547 main.go:141] libmachine: (old-k8s-version-720064) Getting domain xml...
	I1209 23:59:14.991392   84547 main.go:141] libmachine: (old-k8s-version-720064) Creating domain...
	I1209 23:59:13.544971   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.545381   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Found IP for machine: 192.168.72.54
	I1209 23:59:13.545408   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has current primary IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.545418   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Reserving static IP address...
	I1209 23:59:13.545913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-871210", mac: "52:54:00:5e:5b:a3", ip: "192.168.72.54"} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.545942   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | skip adding static IP to network mk-default-k8s-diff-port-871210 - found existing host DHCP lease matching {name: "default-k8s-diff-port-871210", mac: "52:54:00:5e:5b:a3", ip: "192.168.72.54"}
	I1209 23:59:13.545957   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Reserved static IP address: 192.168.72.54
	I1209 23:59:13.545973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for SSH to be available...
	I1209 23:59:13.545988   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Getting to WaitForSSH function...
	I1209 23:59:13.548279   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.548641   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.548700   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.548788   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Using SSH client type: external
	I1209 23:59:13.548812   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa (-rw-------)
	I1209 23:59:13.548882   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:13.548908   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | About to run SSH command:
	I1209 23:59:13.548923   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | exit 0
	I1209 23:59:13.687704   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | SSH cmd err, output: <nil>: 
	I1209 23:59:13.688021   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetConfigRaw
	I1209 23:59:13.688673   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:13.690853   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.691187   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.691224   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.691421   84259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/config.json ...
	I1209 23:59:13.691646   84259 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:13.691665   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:13.691901   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.693958   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.694230   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.694257   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.694392   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.694602   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.694771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.694913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.695093   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.695278   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.695288   84259 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:13.803602   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:13.803629   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:13.803904   84259 buildroot.go:166] provisioning hostname "default-k8s-diff-port-871210"
	I1209 23:59:13.803928   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:13.804106   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.806369   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.806769   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.806800   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.806906   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.807053   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.807222   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.807329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.807551   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.807751   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.807765   84259 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-871210 && echo "default-k8s-diff-port-871210" | sudo tee /etc/hostname
	I1209 23:59:13.932849   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-871210
	
	I1209 23:59:13.932874   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.935573   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.935943   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.935972   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.936143   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.936388   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.936546   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.936682   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.936840   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.937016   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.937046   84259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-871210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-871210/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-871210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:14.057493   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:59:14.057526   84259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:14.057565   84259 buildroot.go:174] setting up certificates
	I1209 23:59:14.057575   84259 provision.go:84] configureAuth start
	I1209 23:59:14.057586   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:14.057860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:14.060619   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.060947   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.060973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.061170   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.063591   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.063919   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.063946   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.064133   84259 provision.go:143] copyHostCerts
	I1209 23:59:14.064180   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:14.064189   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:14.064242   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:14.064333   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:14.064341   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:14.064364   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:14.064416   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:14.064424   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:14.064442   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:14.064488   84259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-871210 san=[127.0.0.1 192.168.72.54 default-k8s-diff-port-871210 localhost minikube]
	I1209 23:59:14.332090   84259 provision.go:177] copyRemoteCerts
	I1209 23:59:14.332148   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:14.332174   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.334856   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.335275   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.335309   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.335428   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.335647   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.335860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.336002   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.422641   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:14.445943   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1209 23:59:14.468188   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:59:14.490678   84259 provision.go:87] duration metric: took 433.06125ms to configureAuth
	I1209 23:59:14.490710   84259 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:14.490883   84259 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:14.490953   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.493529   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.493858   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.493879   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.494073   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.494276   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.494435   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.494555   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.494716   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:14.494880   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:14.494899   84259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:14.720424   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:14.720452   84259 machine.go:96] duration metric: took 1.028790944s to provisionDockerMachine
	I1209 23:59:14.720468   84259 start.go:293] postStartSetup for "default-k8s-diff-port-871210" (driver="kvm2")
	I1209 23:59:14.720480   84259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:14.720496   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.720814   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:14.720844   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.723417   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.723817   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.723851   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.723992   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.724171   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.724328   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.724453   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.810295   84259 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:14.814400   84259 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:14.814424   84259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:14.814501   84259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:14.814595   84259 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:14.814706   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:14.823765   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:14.847185   84259 start.go:296] duration metric: took 126.701807ms for postStartSetup
	I1209 23:59:14.847226   84259 fix.go:56] duration metric: took 19.803160459s for fixHost
	I1209 23:59:14.847245   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.850067   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.850488   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.850516   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.850697   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.850915   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.851082   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.851218   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.851369   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:14.851558   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:14.851588   84259 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:14.960167   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788754.933814431
	
	I1209 23:59:14.960190   84259 fix.go:216] guest clock: 1733788754.933814431
	I1209 23:59:14.960198   84259 fix.go:229] Guest: 2024-12-09 23:59:14.933814431 +0000 UTC Remote: 2024-12-09 23:59:14.847230309 +0000 UTC m=+262.142902203 (delta=86.584122ms)
	I1209 23:59:14.960217   84259 fix.go:200] guest clock delta is within tolerance: 86.584122ms
	I1209 23:59:14.960222   84259 start.go:83] releasing machines lock for "default-k8s-diff-port-871210", held for 19.916185392s
	I1209 23:59:14.960244   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.960544   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:14.963243   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.963653   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.963693   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.963820   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964285   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964505   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964612   84259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:14.964689   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.964749   84259 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:14.964772   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.967430   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.967811   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.967844   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.967871   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.968018   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.968194   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.968388   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.968405   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.968427   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.968563   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.968562   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.968706   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.968878   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.969038   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:15.072098   84259 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:15.077846   84259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:15.222108   84259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:15.230013   84259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:15.230097   84259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:15.247065   84259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:15.247091   84259 start.go:495] detecting cgroup driver to use...
	I1209 23:59:15.247168   84259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:15.268262   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:15.283824   84259 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:15.283893   84259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:15.299384   84259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:15.318727   84259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:15.457157   84259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:15.642629   84259 docker.go:233] disabling docker service ...
	I1209 23:59:15.642718   84259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:15.661646   84259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:15.681887   84259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:15.838344   84259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:15.971028   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:15.984896   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:16.006631   84259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:59:16.006691   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.019058   84259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:16.019143   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.031838   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.043176   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.057386   84259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:16.068694   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.083326   84259 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.102288   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.113538   84259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:16.126450   84259 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:16.126516   84259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:16.147057   84259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:16.157329   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:16.285726   84259 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:16.385931   84259 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:16.386014   84259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:16.390840   84259 start.go:563] Will wait 60s for crictl version
	I1209 23:59:16.390912   84259 ssh_runner.go:195] Run: which crictl
	I1209 23:59:16.394870   84259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:16.438893   84259 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:16.438986   84259 ssh_runner.go:195] Run: crio --version
	I1209 23:59:16.470118   84259 ssh_runner.go:195] Run: crio --version
	I1209 23:59:16.499603   84259 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:59:16.500665   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:16.503766   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:16.504242   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:16.504329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:16.504609   84259 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:16.508742   84259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:16.520816   84259 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-871210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:16.520944   84259 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:59:16.520988   84259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:16.563220   84259 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:59:16.563315   84259 ssh_runner.go:195] Run: which lz4
	I1209 23:59:16.568962   84259 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:59:16.573715   84259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:59:16.573756   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:59:14.294886   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:16.796597   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:17.795347   83900 node_ready.go:49] node "embed-certs-825613" has status "Ready":"True"
	I1209 23:59:17.795381   83900 node_ready.go:38] duration metric: took 7.505214814s for node "embed-certs-825613" to be "Ready" ...
	I1209 23:59:17.795394   83900 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:17.801650   83900 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:17.808087   83900 pod_ready.go:93] pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:17.808113   83900 pod_ready.go:82] duration metric: took 6.437717ms for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:17.808127   83900 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:16.350842   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting to get IP...
	I1209 23:59:16.351883   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.352326   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.352393   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.352304   85519 retry.go:31] will retry after 268.149849ms: waiting for machine to come up
	I1209 23:59:16.621742   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.622116   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.622145   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.622066   85519 retry.go:31] will retry after 365.051996ms: waiting for machine to come up
	I1209 23:59:16.988590   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.989124   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.989154   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.989089   85519 retry.go:31] will retry after 441.697933ms: waiting for machine to come up
	I1209 23:59:17.432962   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.433453   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.433482   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.433356   85519 retry.go:31] will retry after 503.173846ms: waiting for machine to come up
	I1209 23:59:17.938107   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.938576   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.938610   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.938533   85519 retry.go:31] will retry after 476.993358ms: waiting for machine to come up
	I1209 23:59:18.417462   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:18.418037   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:18.418064   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:18.417989   85519 retry.go:31] will retry after 732.449849ms: waiting for machine to come up
	I1209 23:59:19.152120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.152680   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.152708   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.152624   85519 retry.go:31] will retry after 764.17794ms: waiting for machine to come up
	I1209 23:59:19.918630   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.919113   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.919141   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.919066   85519 retry.go:31] will retry after 1.072352346s: waiting for machine to come up
	I1209 23:59:17.907764   84259 crio.go:462] duration metric: took 1.338836821s to copy over tarball
	I1209 23:59:17.907846   84259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:59:20.133171   84259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.22529043s)
	I1209 23:59:20.133214   84259 crio.go:469] duration metric: took 2.225420822s to extract the tarball
	I1209 23:59:20.133224   84259 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:59:20.169786   84259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:20.211990   84259 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:59:20.212020   84259 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:59:20.212030   84259 kubeadm.go:934] updating node { 192.168.72.54 8444 v1.31.2 crio true true} ...
	I1209 23:59:20.212166   84259 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-871210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:20.212247   84259 ssh_runner.go:195] Run: crio config
	I1209 23:59:20.255141   84259 cni.go:84] Creating CNI manager for ""
	I1209 23:59:20.255169   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:20.255182   84259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:20.255215   84259 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-871210 NodeName:default-k8s-diff-port-871210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:59:20.255338   84259 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-871210"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.54"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:20.255407   84259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:59:20.265160   84259 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:20.265244   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:20.275799   84259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1209 23:59:20.292089   84259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:20.308054   84259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1209 23:59:20.327464   84259 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:20.331101   84259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:20.343232   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:20.473225   84259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:20.492069   84259 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210 for IP: 192.168.72.54
	I1209 23:59:20.492098   84259 certs.go:194] generating shared ca certs ...
	I1209 23:59:20.492119   84259 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:20.492311   84259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:20.492363   84259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:20.492378   84259 certs.go:256] generating profile certs ...
	I1209 23:59:20.492499   84259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/client.key
	I1209 23:59:20.492605   84259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.key.9cc284f2
	I1209 23:59:20.492663   84259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.key
	I1209 23:59:20.492831   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:20.492872   84259 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:20.492886   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:20.492918   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:20.492951   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:20.492997   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:20.493071   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:20.494023   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:20.543769   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:20.583697   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:20.615170   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:20.638784   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 23:59:20.673708   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:59:20.696690   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:20.719682   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:20.744292   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:20.767643   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:20.790927   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:20.816746   84259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:20.834150   84259 ssh_runner.go:195] Run: openssl version
	I1209 23:59:20.840153   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:20.851006   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.855430   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.855492   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.860866   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:20.871983   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:20.882642   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.886978   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.887050   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.892472   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:20.902959   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:20.913774   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.918344   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.918394   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.923981   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:20.934931   84259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:20.939662   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:20.945633   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:20.951650   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:20.957628   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:20.963600   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:20.969399   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
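
openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether existing certs can be reused. The same check can be done natively; a small sketch with crypto/x509 (the file path is one of the certs from the log, the 24-hour window mirrors -checkend 86400):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before
// now+window, i.e. a native equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}
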
	I1209 23:59:20.974992   84259 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-871210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:20.975088   84259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:20.975143   84259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:21.012553   84259 cri.go:89] found id: ""
	I1209 23:59:21.012647   84259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:21.023728   84259 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:21.023758   84259 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:21.023808   84259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:21.033788   84259 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:21.034806   84259 kubeconfig.go:125] found "default-k8s-diff-port-871210" server: "https://192.168.72.54:8444"
	I1209 23:59:21.036852   84259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:21.046251   84259 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I1209 23:59:21.046280   84259 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:21.046292   84259 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:21.046354   84259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:21.084915   84259 cri.go:89] found id: ""
	I1209 23:59:21.085000   84259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
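
Before touching the control plane, minikube lists any kube-system containers via crictl (the exact flags are in the log above; this run found none) and stops the kubelet. A hedged sketch of the listing step, parsing the IDs that `crictl ps --quiet` prints one per line:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers returns the container IDs crictl reports for the
// kube-system namespace, using the same flags that appear in the log.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainers()
	fmt.Println(ids, err) // the log above found no IDs (`found id: ""`)
}
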
	I1209 23:59:21.100592   84259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:21.111180   84259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:21.111204   84259 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:21.111254   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 23:59:21.120027   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:21.120087   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:21.129165   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 23:59:21.138254   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:21.138315   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:21.147276   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 23:59:21.155635   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:21.155711   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:21.164794   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 23:59:21.173421   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:21.173477   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
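
The grep/rm sequence above is the stale-kubeconfig cleanup: each static kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8444; otherwise it is removed so the kubeadm phases below can regenerate it. A compact sketch of that loop (endpoint and file list copied from the log; this is an illustration, not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it so
			// `kubeadm init phase kubeconfig` can write a fresh one.
			_ = os.Remove(f)
			fmt.Println("removed stale", f)
		}
	}
}
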
	I1209 23:59:21.182615   84259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:21.191795   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:21.289295   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.415644   84259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.126305379s)
	I1209 23:59:22.415695   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.616211   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.672571   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.731282   84259 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:22.731363   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
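
After the init phases, the restart path simply waits for a kube-apiserver process to appear, polling `pgrep -xnf kube-apiserver.*minikube.*` as seen above. A minimal polling sketch (the timeout and sleep interval are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the deadline passes.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute))
}
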
	I1209 23:59:19.816966   83900 pod_ready.go:103] pod "etcd-embed-certs-825613" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:21.814625   83900 pod_ready.go:93] pod "etcd-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.814650   83900 pod_ready.go:82] duration metric: took 4.006513871s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.814662   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.819611   83900 pod_ready.go:93] pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.819634   83900 pod_ready.go:82] duration metric: took 4.964081ms for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.819647   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.823994   83900 pod_ready.go:93] pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.824016   83900 pod_ready.go:82] duration metric: took 4.360902ms for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.824028   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.829102   83900 pod_ready.go:93] pod "kube-proxy-rn6fg" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.829128   83900 pod_ready.go:82] duration metric: took 5.090595ms for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.829161   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:23.983548   83900 pod_ready.go:93] pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:23.983603   83900 pod_ready.go:82] duration metric: took 2.154427639s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:23.983620   83900 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
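
The pod_ready lines above poll each control-plane pod until its Ready condition is True. A hedged client-go sketch of the same check for one pod (pod name and namespace come from the log; the kubeconfig path is a placeholder, and this is not minikube's pod_ready helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true once the pod reports condition Ready=True.
func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := isPodReady(cs, "kube-system", "etcd-embed-certs-825613")
		fmt.Println("ready:", ready, "err:", err)
		if ready {
			return
		}
		time.Sleep(2 * time.Second)
	}
}
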
	I1209 23:59:20.992747   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:20.993138   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:20.993196   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:20.993111   85519 retry.go:31] will retry after 1.479639772s: waiting for machine to come up
	I1209 23:59:22.474889   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:22.475414   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:22.475445   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:22.475364   85519 retry.go:31] will retry after 2.248463233s: waiting for machine to come up
	I1209 23:59:24.725307   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:24.725765   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:24.725796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:24.725706   85519 retry.go:31] will retry after 2.803512929s: waiting for machine to come up
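
The retry.go lines show libmachine waiting for the rebooted VM to obtain an IP, backing each attempt off with a growing, slightly jittered delay. A small sketch of that retry shape (the backoff constants are illustrative, not the ones minikube uses):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter keeps calling fn until it succeeds or attempts run out,
// sleeping a growing, jittered delay between tries.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	tries := 0
	err := retryWithJitter(5, time.Second, func() error {
		tries++
		if tries < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println(err)
}
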
	I1209 23:59:23.231575   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:23.731704   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:24.231740   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:24.265107   84259 api_server.go:72] duration metric: took 1.533824471s to wait for apiserver process to appear ...
	I1209 23:59:24.265144   84259 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:59:24.265173   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:24.265664   84259 api_server.go:269] stopped: https://192.168.72.54:8444/healthz: Get "https://192.168.72.54:8444/healthz": dial tcp 192.168.72.54:8444: connect: connection refused
	I1209 23:59:24.765571   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.251990   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:27.252018   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:27.252033   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.284733   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:27.284769   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:27.284781   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.313967   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:27.313998   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:25.990720   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:28.490332   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:27.530567   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:27.531086   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:27.531120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:27.531022   85519 retry.go:31] will retry after 2.800227475s: waiting for machine to come up
	I1209 23:59:30.333201   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:30.333660   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:30.333684   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:30.333621   85519 retry.go:31] will retry after 3.041733113s: waiting for machine to come up
	I1209 23:59:27.765806   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.773528   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:27.773564   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:28.266012   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:28.275940   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:28.275965   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:28.765502   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:28.770695   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:28.770725   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:29.265309   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:29.270844   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:29.270883   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:29.765323   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:29.769949   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:29.769974   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:30.265981   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:30.270284   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:30.270313   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:30.765915   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:30.771542   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I1209 23:59:30.781357   84259 api_server.go:141] control plane version: v1.31.2
	I1209 23:59:30.781390   84259 api_server.go:131] duration metric: took 6.516238077s to wait for apiserver health ...
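
The long healthz exchange above is the normal restart sequence: connection refused while the apiserver binds, 403 for the anonymous user before RBAC bootstrap, 500 while the poststart hooks finish, then 200. A hedged sketch of that polling loop (URL taken from the log; certificate verification is skipped because the probe goes straight to the node IP, and the timeout is illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses, printing the body on failures like the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.54:8444/healthz", 2*time.Minute))
}
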
	I1209 23:59:30.781400   84259 cni.go:84] Creating CNI manager for ""
	I1209 23:59:30.781409   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:30.783438   84259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:59:30.784794   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:59:30.794916   84259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
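
With the kvm2 driver and the crio runtime, minikube falls back to a plain bridge CNI and writes a conflist to /etc/cni/net.d/1-k8s.conflist (496 bytes in this run). The log does not show the file's contents; purely as an assumption-labeled illustration, a bridge + host-local conflist of that general shape could be written like this (the JSON below is a guess at the structure, not the file minikube actually generated):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// A representative bridge CNI config; the real 1-k8s.conflist minikube writes
// may differ in fields and values.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	path := filepath.Join(dir, "1-k8s.conflist")
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote", path)
}
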
	I1209 23:59:30.812099   84259 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:59:30.822915   84259 system_pods.go:59] 8 kube-system pods found
	I1209 23:59:30.822967   84259 system_pods.go:61] "coredns-7c65d6cfc9-wclgl" [22b2e24e-5a03-4d4f-a071-68e414aaf6cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:59:30.822980   84259 system_pods.go:61] "etcd-default-k8s-diff-port-871210" [2058a33f-dd4b-42bc-94d9-4cd130b9389c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:59:30.822990   84259 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-871210" [26ffd01d-48b5-4e38-a91b-bf75dd75e3c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:59:30.823000   84259 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-871210" [cea3a810-c7e9-48d6-9fd7-1f6258c45387] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:59:30.823006   84259 system_pods.go:61] "kube-proxy-d7sxm" [e28ccb5f-a282-4371-ae1a-fca52dd58616] Running
	I1209 23:59:30.823021   84259 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-871210" [790619f6-29a2-4bbc-89ba-a7b93653459b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:59:30.823033   84259 system_pods.go:61] "metrics-server-6867b74b74-lgzdz" [2c251249-58b6-44c4-929a-0f5c963d83b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:59:30.823039   84259 system_pods.go:61] "storage-provisioner" [63078ee5-81b8-4548-88bf-2b89857ea01a] Running
	I1209 23:59:30.823050   84259 system_pods.go:74] duration metric: took 10.914419ms to wait for pod list to return data ...
	I1209 23:59:30.823063   84259 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:59:30.828012   84259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:59:30.828062   84259 node_conditions.go:123] node cpu capacity is 2
	I1209 23:59:30.828076   84259 node_conditions.go:105] duration metric: took 5.004481ms to run NodePressure ...
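
The system_pods and NodePressure steps above are sanity gates: the restart only proceeds once kube-system pods can be listed and the node reports usable CPU and ephemeral storage (this run saw cpu capacity 2 and 17734596Ki). A brief client-go sketch of reading those capacities (kubeconfig path is a placeholder; this is not minikube's node_conditions check):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
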
	I1209 23:59:30.828097   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:31.092839   84259 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 23:59:31.097588   84259 kubeadm.go:739] kubelet initialised
	I1209 23:59:31.097613   84259 kubeadm.go:740] duration metric: took 4.744115ms waiting for restarted kubelet to initialise ...
	I1209 23:59:31.097623   84259 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:31.104318   84259 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:30.991511   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:33.490390   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:34.824186   83859 start.go:364] duration metric: took 55.338760885s to acquireMachinesLock for "no-preload-048296"
	I1209 23:59:34.824245   83859 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:59:34.824272   83859 fix.go:54] fixHost starting: 
	I1209 23:59:34.824660   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:34.824713   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:34.842421   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I1209 23:59:34.842851   83859 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:34.843297   83859 main.go:141] libmachine: Using API Version  1
	I1209 23:59:34.843319   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:34.843701   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:34.843916   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:34.844066   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1209 23:59:34.845768   83859 fix.go:112] recreateIfNeeded on no-preload-048296: state=Stopped err=<nil>
	I1209 23:59:34.845792   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	W1209 23:59:34.845958   83859 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:59:34.847617   83859 out.go:177] * Restarting existing kvm2 VM for "no-preload-048296" ...
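
fixHost found no-preload-048296 stopped, so the kvm2 driver (reached over the local plugin server on 127.0.0.1:35591) is asked to restart the existing domain instead of recreating it. The driver itself talks to libvirt directly; purely as an illustration of the same check-and-start flow, the virsh CLI equivalent could be scripted like this (domain name and qemu:///system URI are from the log, but this is not how the plugin actually does it):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const uri = "qemu:///system"

// domainState shells out to virsh to read the current state of a libvirt domain.
func domainState(name string) (string, error) {
	out, err := exec.Command("virsh", "-c", uri, "domstate", name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := "no-preload-048296"
	state, err := domainState(name)
	if err != nil {
		panic(err)
	}
	fmt.Println("state:", state)
	if state == "shut off" {
		// Restart the existing VM rather than recreating it.
		if out, err := exec.Command("virsh", "-c", uri, "start", name).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("start failed: %v: %s", err, out))
		}
		fmt.Println("started", name)
	}
}
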
	I1209 23:59:33.378848   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379297   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has current primary IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379326   84547 main.go:141] libmachine: (old-k8s-version-720064) Found IP for machine: 192.168.39.188
	I1209 23:59:33.379351   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserving static IP address...
	I1209 23:59:33.379701   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.379722   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserved static IP address: 192.168.39.188
	I1209 23:59:33.379743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | skip adding static IP to network mk-old-k8s-version-720064 - found existing host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"}
	I1209 23:59:33.379759   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Getting to WaitForSSH function...
	I1209 23:59:33.379776   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting for SSH to be available...
	I1209 23:59:33.381697   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.381990   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.382027   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.382137   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH client type: external
	I1209 23:59:33.382163   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa (-rw-------)
	I1209 23:59:33.382193   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:33.382212   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | About to run SSH command:
	I1209 23:59:33.382229   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | exit 0
	I1209 23:59:33.507653   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | SSH cmd err, output: <nil>: 
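
WaitForSSH above literally runs `exit 0` through the external ssh client until it succeeds. A simpler approximation that only waits for the guest's SSH port to accept connections (host and port from the log; the timeout is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSHPort dials the SSH port until a TCP connection succeeds.
// This only proves the port is open; running `exit 0` over ssh, as the
// driver does above, additionally proves authentication works.
func waitForSSHPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh on %s not reachable after %s", addr, timeout)
}

func main() {
	fmt.Println(waitForSSHPort("192.168.39.188:22", 2*time.Minute))
}
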
	I1209 23:59:33.507957   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetConfigRaw
	I1209 23:59:33.508555   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.511020   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511399   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.511431   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511695   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:59:33.511932   84547 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:33.511958   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:33.512144   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.514383   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514722   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.514743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514876   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.515048   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515187   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515349   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.515596   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.515835   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.515850   84547 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:33.623370   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:33.623407   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623670   84547 buildroot.go:166] provisioning hostname "old-k8s-version-720064"
	I1209 23:59:33.623708   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623909   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.626370   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626749   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.626781   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626943   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.627172   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627348   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627490   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.627666   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.627868   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.627884   84547 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-720064 && echo "old-k8s-version-720064" | sudo tee /etc/hostname
	I1209 23:59:33.749338   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-720064
	
	I1209 23:59:33.749370   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.752401   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.752774   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.752807   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.753000   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.753180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753372   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753531   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.753674   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.753837   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.753853   84547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-720064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-720064/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-720064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:33.872512   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:59:33.872548   84547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:33.872575   84547 buildroot.go:174] setting up certificates
	I1209 23:59:33.872585   84547 provision.go:84] configureAuth start
	I1209 23:59:33.872597   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.872906   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.875719   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876012   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.876051   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876210   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.878539   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.878857   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.878894   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.879025   84547 provision.go:143] copyHostCerts
	I1209 23:59:33.879087   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:33.879100   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:33.879166   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:33.879254   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:33.879262   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:33.879283   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:33.879338   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:33.879346   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:33.879365   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
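The copyHostCerts step above refreshes each host cert by removing any stale copy before copying the source file back in. A minimal Go sketch of that remove-then-copy pattern, assuming a hypothetical copyReplacing helper (this is not minikube's exec_runner API):

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// copyReplacing mirrors the copyHostCerts pattern in the log above: if the
// destination already exists it is removed first, then the source is copied in.
func copyReplacing(src, dst string) (int64, error) {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return 0, err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return 0, err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return 0, err
	}
	defer out.Close()
	return io.Copy(out, in)
}

func main() {
	dir, _ := os.MkdirTemp("", "certs")
	src := filepath.Join(dir, "ca.pem")
	os.WriteFile(src, []byte("dummy cert material\n"), 0o600)
	n, err := copyReplacing(src, filepath.Join(dir, "ca-copy.pem"))
	fmt.Println(n, err)
}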
	I1209 23:59:33.879414   84547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-720064 san=[127.0.0.1 192.168.39.188 localhost minikube old-k8s-version-720064]
	I1209 23:59:34.211417   84547 provision.go:177] copyRemoteCerts
	I1209 23:59:34.211475   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:34.211504   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.214285   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.214686   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214789   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.215006   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.215180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.215345   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.301399   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:59:34.326216   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:34.349136   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 23:59:34.371597   84547 provision.go:87] duration metric: took 498.999263ms to configureAuth
	I1209 23:59:34.371633   84547 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:34.371832   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:59:34.371902   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.374649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.374985   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.375031   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.375161   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.375368   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375541   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.375897   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.376128   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.376152   84547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:34.588357   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:34.588391   84547 machine.go:96] duration metric: took 1.076442399s to provisionDockerMachine
	I1209 23:59:34.588402   84547 start.go:293] postStartSetup for "old-k8s-version-720064" (driver="kvm2")
	I1209 23:59:34.588412   84547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:34.588443   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.588749   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:34.588772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.591413   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.591805   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.591836   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.592001   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.592176   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.592325   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.592431   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.673445   84547 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:34.677394   84547 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:34.677421   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:34.677498   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:34.677600   84547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:34.677716   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:34.686261   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:34.709412   84547 start.go:296] duration metric: took 120.993275ms for postStartSetup
	I1209 23:59:34.709467   84547 fix.go:56] duration metric: took 19.749026723s for fixHost
	I1209 23:59:34.709495   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.712458   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712762   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.712796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712930   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.713158   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713335   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713506   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.713690   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.713852   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.713862   84547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:34.823993   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788774.778513264
	
	I1209 23:59:34.824022   84547 fix.go:216] guest clock: 1733788774.778513264
	I1209 23:59:34.824034   84547 fix.go:229] Guest: 2024-12-09 23:59:34.778513264 +0000 UTC Remote: 2024-12-09 23:59:34.709473288 +0000 UTC m=+249.236536627 (delta=69.039976ms)
	I1209 23:59:34.824067   84547 fix.go:200] guest clock delta is within tolerance: 69.039976ms
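The fixHost step compares the guest's clock with the host-side timestamp and only intervenes if the skew exceeds a tolerance. A rough Go sketch of that comparison using the two timestamps from the log; maxClockDelta is an assumed threshold, not the value minikube actually uses:

package main

import (
	"fmt"
	"time"
)

// maxClockDelta is an assumed tolerance for illustration; the real threshold
// lives in minikube's fix.go and is not shown in this log.
const maxClockDelta = 2 * time.Second

func main() {
	// Guest and remote timestamps copied from the log lines above.
	guest := time.Unix(1733788774, 778513264)
	remote := time.Date(2024, 12, 9, 23, 59, 34, 709473288, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	if delta <= maxClockDelta {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skew too large: %v\n", delta)
	}
}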
	I1209 23:59:34.824075   84547 start.go:83] releasing machines lock for "old-k8s-version-720064", held for 19.863672525s
	I1209 23:59:34.824110   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.824391   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:34.827165   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827596   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.827629   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827894   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828467   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828690   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828802   84547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:34.828865   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.828888   84547 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:34.828917   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.831978   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832052   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832514   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832548   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832572   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832748   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832751   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832899   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832982   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.832986   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.833198   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833217   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833370   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.833374   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.949044   84547 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:34.954999   84547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:35.100096   84547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:35.105764   84547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:35.105852   84547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:35.126893   84547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:35.126915   84547 start.go:495] detecting cgroup driver to use...
	I1209 23:59:35.126968   84547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:35.143236   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:35.157019   84547 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:35.157098   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:35.170608   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:35.184816   84547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:35.310043   84547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:35.486472   84547 docker.go:233] disabling docker service ...
	I1209 23:59:35.486653   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:35.501938   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:35.516997   84547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:35.667304   84547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:35.821316   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:35.836423   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:35.854633   84547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 23:59:35.854719   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.865592   84547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:35.865677   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.877247   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.889007   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.900196   84547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:35.912403   84547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:35.922919   84547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:35.922987   84547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:35.939439   84547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:35.949715   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:36.100115   84547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:36.207129   84547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:36.207206   84547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:36.213672   84547 start.go:563] Will wait 60s for crictl version
	I1209 23:59:36.213758   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:36.218204   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:36.255262   84547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:36.255367   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.286277   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.317334   84547 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 23:59:34.848733   83859 main.go:141] libmachine: (no-preload-048296) Calling .Start
	I1209 23:59:34.848943   83859 main.go:141] libmachine: (no-preload-048296) Ensuring networks are active...
	I1209 23:59:34.849641   83859 main.go:141] libmachine: (no-preload-048296) Ensuring network default is active
	I1209 23:59:34.850039   83859 main.go:141] libmachine: (no-preload-048296) Ensuring network mk-no-preload-048296 is active
	I1209 23:59:34.850540   83859 main.go:141] libmachine: (no-preload-048296) Getting domain xml...
	I1209 23:59:34.851224   83859 main.go:141] libmachine: (no-preload-048296) Creating domain...
	I1209 23:59:36.270761   83859 main.go:141] libmachine: (no-preload-048296) Waiting to get IP...
	I1209 23:59:36.271970   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.272578   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.272645   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.272536   85657 retry.go:31] will retry after 253.181092ms: waiting for machine to come up
	I1209 23:59:36.527128   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.527735   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.527762   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.527683   85657 retry.go:31] will retry after 297.608817ms: waiting for machine to come up
	I1209 23:59:36.827372   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.828819   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.828849   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.828763   85657 retry.go:31] will retry after 374.112777ms: waiting for machine to come up
	I1209 23:59:33.112105   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:35.611353   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:36.115472   84259 pod_ready.go:93] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:36.115504   84259 pod_ready.go:82] duration metric: took 5.011153637s for pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:36.115521   84259 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:35.492415   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:37.992287   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:36.318651   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:36.321927   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322336   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:36.322366   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322619   84547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:36.327938   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
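The bash one-liner above rewrites /etc/hosts so that exactly one entry for host.minikube.internal remains. A small Go sketch of the same drop-then-append idea; upsertHostsEntry is illustrative and not minikube code:

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry mirrors the shell one-liner in the log: drop any line that
// already ends in "<tab><name>", then append a fresh "<ip><tab><name>" entry.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}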
	I1209 23:59:36.344152   84547 kubeadm.go:883] updating cluster {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:36.344279   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:59:36.344328   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:36.391261   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:36.391348   84547 ssh_runner.go:195] Run: which lz4
	I1209 23:59:36.395391   84547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:59:36.399585   84547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:59:36.399622   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 23:59:37.969251   84547 crio.go:462] duration metric: took 1.573891158s to copy over tarball
	I1209 23:59:37.969362   84547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:59:37.204687   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:37.205458   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:37.205479   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:37.205372   85657 retry.go:31] will retry after 420.490569ms: waiting for machine to come up
	I1209 23:59:37.627167   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:37.629813   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:37.629839   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:37.629687   85657 retry.go:31] will retry after 561.795207ms: waiting for machine to come up
	I1209 23:59:38.193652   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:38.194178   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:38.194200   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:38.194119   85657 retry.go:31] will retry after 925.018936ms: waiting for machine to come up
	I1209 23:59:39.121476   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:39.122014   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:39.122046   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:39.121958   85657 retry.go:31] will retry after 984.547761ms: waiting for machine to come up
	I1209 23:59:40.108478   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:40.108947   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:40.108981   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:40.108925   85657 retry.go:31] will retry after 1.261498912s: waiting for machine to come up
	I1209 23:59:41.372403   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:41.372901   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:41.372922   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:41.372884   85657 retry.go:31] will retry after 1.498283156s: waiting for machine to come up
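Interleaved with the old-k8s-version-720064 logs, process 83859 is still waiting for the no-preload-048296 VM to obtain a DHCP lease, retrying with growing delays. A simplified Go sketch of such a retry loop; lookupIP and the delay schedule are stand-ins, not libmachine's implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// calls drives the stand-in lease lookup below so the retry loop terminates.
var calls int

// lookupIP stands in for the libvirt DHCP-lease query seen in the log; it fails
// a few times and then returns a dummy address from the documentation range.
func lookupIP() (string, error) {
	calls++
	if calls < 4 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.0.2.10", nil
}

func main() {
	for attempt := 0; ; attempt++ {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine came up with IP", ip)
			return
		}
		// Growing, jittered delay, in the spirit of the
		// "will retry after ...ms: waiting for machine to come up" lines.
		delay := time.Duration(200+rand.Intn(300)) * time.Millisecond * time.Duration(attempt+1)
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
}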
	I1209 23:59:38.122964   84259 pod_ready.go:93] pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:38.122990   84259 pod_ready.go:82] duration metric: took 2.007459606s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:38.123004   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.133257   84259 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:40.631911   84259 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:40.631940   84259 pod_ready.go:82] duration metric: took 2.508926956s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.631957   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.492044   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:42.991179   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:41.050494   84547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081073233s)
	I1209 23:59:41.050524   84547 crio.go:469] duration metric: took 3.081237568s to extract the tarball
	I1209 23:59:41.050533   84547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:59:41.092694   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:41.125264   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:41.125289   84547 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.125378   84547 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.125388   84547 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.125436   84547 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 23:59:41.125466   84547 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.125497   84547 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.125515   84547 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127285   84547 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.127325   84547 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 23:59:41.127376   84547 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.127405   84547 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127421   84547 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.127300   84547 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.302277   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.314265   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 23:59:41.323683   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.326327   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.345042   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.345550   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.362324   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.367612   84547 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 23:59:41.367663   84547 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.367706   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.406644   84547 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 23:59:41.406691   84547 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 23:59:41.406783   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.450490   84547 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 23:59:41.450550   84547 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.450601   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.477332   84547 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 23:59:41.477389   84547 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.477438   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499714   84547 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 23:59:41.499757   84547 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.499783   84547 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 23:59:41.499806   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499818   84547 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.499861   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505128   84547 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 23:59:41.505179   84547 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.505205   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.505249   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.505212   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505306   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.505342   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.507860   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.507923   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.643108   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.644251   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.644273   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.644358   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.644400   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.650732   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.650817   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.777981   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.789388   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.789435   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.789491   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.789561   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.789592   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.802784   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.912370   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.914494   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 23:59:41.944460   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 23:59:41.944564   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 23:59:41.944569   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.944660   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 23:59:41.944662   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 23:59:41.944726   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 23:59:42.104003   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 23:59:42.104074   84547 cache_images.go:92] duration metric: took 978.7734ms to LoadCachedImages
	W1209 23:59:42.104176   84547 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
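The cache_images step decides, per image, whether a transfer is needed by checking that the expected image ID is already present in the container runtime. A condensed Go sketch of that check; the maps stand in for `podman image inspect` output, with the two IDs copied from the log lines above:

package main

import "fmt"

// expectedIDs maps each cached image to the image ID minikube expects; the two
// sample IDs are copied from the "does not exist at hash ..." lines above.
var expectedIDs = map[string]string{
	"registry.k8s.io/kube-apiserver:v1.20.0": "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99",
	"registry.k8s.io/pause:3.2":              "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
}

// inRuntime stands in for the inspected runtime contents; an empty map models a
// runtime with no preloaded images, as in this run.
var inRuntime = map[string]string{}

func main() {
	for ref, want := range expectedIDs {
		if got, ok := inRuntime[ref]; !ok || got != want {
			fmt.Printf("%q needs transfer: not present at hash %s\n", ref, want)
		}
	}
}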
	I1209 23:59:42.104198   84547 kubeadm.go:934] updating node { 192.168.39.188 8443 v1.20.0 crio true true} ...
	I1209 23:59:42.104326   84547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-720064 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
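kubeadm.go:946 renders the kubelet systemd drop-in from the node's settings before it is written out as 10-kubeadm.conf below. A toy Go text/template that produces a unit of the same shape; unitTmpl and unitParams are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// unitTmpl is an illustrative template, not minikube's real kubelet template;
// the flag values below are taken from the rendered unit shown in the log.
const unitTmpl = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubeVersion}}/kubelet --container-runtime=remote --container-runtime-endpoint={{.Endpoint}} --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

type unitParams struct {
	Runtime, KubeVersion, Endpoint, Node, IP string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	if err := t.Execute(os.Stdout, unitParams{
		Runtime:     "crio",
		KubeVersion: "v1.20.0",
		Endpoint:    "unix:///var/run/crio/crio.sock",
		Node:        "old-k8s-version-720064",
		IP:          "192.168.39.188",
	}); err != nil {
		panic(err)
	}
}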
	I1209 23:59:42.104431   84547 ssh_runner.go:195] Run: crio config
	I1209 23:59:42.154016   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:59:42.154041   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:42.154050   84547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:42.154066   84547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.188 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-720064 NodeName:old-k8s-version-720064 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 23:59:42.154226   84547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-720064"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.188
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.188"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:42.154292   84547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 23:59:42.164866   84547 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:42.164943   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:42.175137   84547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 23:59:42.192176   84547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:42.210695   84547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 23:59:42.230364   84547 ssh_runner.go:195] Run: grep 192.168.39.188	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:42.234239   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:42.246747   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:42.379643   84547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:42.396626   84547 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064 for IP: 192.168.39.188
	I1209 23:59:42.396713   84547 certs.go:194] generating shared ca certs ...
	I1209 23:59:42.396746   84547 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.396942   84547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:42.397007   84547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:42.397023   84547 certs.go:256] generating profile certs ...
	I1209 23:59:42.397191   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/client.key
	I1209 23:59:42.397270   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key.abcbe3fa
	I1209 23:59:42.397330   84547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key
	I1209 23:59:42.397516   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:42.397564   84547 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:42.397580   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:42.397623   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:42.397662   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:42.397701   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:42.397768   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:42.398726   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:42.450053   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:42.475685   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:42.514396   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:42.562383   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 23:59:42.589690   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:59:42.614149   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:42.647112   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:42.671117   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:42.694303   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:42.722563   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:42.753478   84547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:42.773673   84547 ssh_runner.go:195] Run: openssl version
	I1209 23:59:42.779930   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:42.791531   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796422   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796536   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.802386   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:42.817058   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:42.828570   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833292   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833357   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.839679   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:42.850729   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:42.861699   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866417   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866479   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.873602   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:42.884398   84547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:42.889137   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:42.896108   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:42.902584   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:42.908706   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:42.914313   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:42.919943   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 23:59:42.925417   84547 kubeadm.go:392] StartCluster: {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:42.925546   84547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:42.925602   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:42.962948   84547 cri.go:89] found id: ""
	I1209 23:59:42.963030   84547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:42.973746   84547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:42.973768   84547 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:42.973819   84547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:42.983593   84547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:42.984528   84547 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-720064" does not appear in /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:59:42.985106   84547 kubeconfig.go:62] /home/jenkins/minikube-integration/19888-18950/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-720064" cluster setting kubeconfig missing "old-k8s-version-720064" context setting]
	I1209 23:59:42.985981   84547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.996842   84547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:43.007858   84547 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.188
	I1209 23:59:43.007901   84547 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:43.007916   84547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:43.007982   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:43.046416   84547 cri.go:89] found id: ""
	I1209 23:59:43.046495   84547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:43.063226   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:43.073919   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:43.073946   84547 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:43.073991   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:59:43.084217   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:43.084292   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:43.094196   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:59:43.104098   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:43.104178   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:43.115056   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.124696   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:43.124761   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.135180   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:59:43.146325   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:43.146392   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:43.156017   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:43.166584   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:43.376941   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.198180   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.431002   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.549010   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.644736   84547 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:44.644827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:45.145713   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:42.872724   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:42.873229   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:42.873260   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:42.873178   85657 retry.go:31] will retry after 1.469240763s: waiting for machine to come up
	I1209 23:59:44.344825   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:44.345339   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:44.345374   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:44.345317   85657 retry.go:31] will retry after 1.85673524s: waiting for machine to come up
	I1209 23:59:46.203693   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:46.204128   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:46.204152   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:46.204065   85657 retry.go:31] will retry after 3.5180616s: waiting for machine to come up
	I1209 23:59:43.003893   84259 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:43.734778   84259 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:43.734806   84259 pod_ready.go:82] duration metric: took 3.102839103s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.734821   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d7sxm" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.743393   84259 pod_ready.go:93] pod "kube-proxy-d7sxm" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:43.743421   84259 pod_ready.go:82] duration metric: took 8.592191ms for pod "kube-proxy-d7sxm" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.743435   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:44.250028   84259 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:44.250051   84259 pod_ready.go:82] duration metric: took 506.607784ms for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:44.250063   84259 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:46.256096   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:45.494189   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:47.993074   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:45.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.145300   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.644880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.144955   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.645081   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.145132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.645278   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.144932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.645874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:50.145061   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.725528   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:49.726040   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:49.726069   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:49.725982   85657 retry.go:31] will retry after 3.98915487s: waiting for machine to come up
	I1209 23:59:48.256397   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.757098   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.491110   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:52.989722   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.645171   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.144908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.645198   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.145700   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.145782   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.645678   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.145521   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:55.145578   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.718634   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.719141   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has current primary IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.719163   83859 main.go:141] libmachine: (no-preload-048296) Found IP for machine: 192.168.61.182
	I1209 23:59:53.719173   83859 main.go:141] libmachine: (no-preload-048296) Reserving static IP address...
	I1209 23:59:53.719594   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "no-preload-048296", mac: "52:54:00:c6:cf:c7", ip: "192.168.61.182"} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.719621   83859 main.go:141] libmachine: (no-preload-048296) DBG | skip adding static IP to network mk-no-preload-048296 - found existing host DHCP lease matching {name: "no-preload-048296", mac: "52:54:00:c6:cf:c7", ip: "192.168.61.182"}
	I1209 23:59:53.719632   83859 main.go:141] libmachine: (no-preload-048296) Reserved static IP address: 192.168.61.182
	I1209 23:59:53.719641   83859 main.go:141] libmachine: (no-preload-048296) Waiting for SSH to be available...
	I1209 23:59:53.719671   83859 main.go:141] libmachine: (no-preload-048296) DBG | Getting to WaitForSSH function...
	I1209 23:59:53.722015   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.722309   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.722342   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.722445   83859 main.go:141] libmachine: (no-preload-048296) DBG | Using SSH client type: external
	I1209 23:59:53.722471   83859 main.go:141] libmachine: (no-preload-048296) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa (-rw-------)
	I1209 23:59:53.722504   83859 main.go:141] libmachine: (no-preload-048296) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:53.722517   83859 main.go:141] libmachine: (no-preload-048296) DBG | About to run SSH command:
	I1209 23:59:53.722529   83859 main.go:141] libmachine: (no-preload-048296) DBG | exit 0
	I1209 23:59:53.843261   83859 main.go:141] libmachine: (no-preload-048296) DBG | SSH cmd err, output: <nil>: 
	I1209 23:59:53.843673   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetConfigRaw
	I1209 23:59:53.844367   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:53.846889   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.847170   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.847203   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.847433   83859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/config.json ...
	I1209 23:59:53.847656   83859 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:53.847675   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:53.847834   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:53.849867   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.850179   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.850213   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.850372   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:53.850548   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.850702   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.850878   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:53.851058   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:53.851288   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:53.851303   83859 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:53.947889   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:53.947916   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:53.948220   83859 buildroot.go:166] provisioning hostname "no-preload-048296"
	I1209 23:59:53.948259   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:53.948496   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:53.951070   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.951479   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.951509   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.951729   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:53.951919   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.952120   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.952280   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:53.952456   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:53.952650   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:53.952672   83859 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-048296 && echo "no-preload-048296" | sudo tee /etc/hostname
	I1209 23:59:54.065706   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-048296
	
	I1209 23:59:54.065738   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.068629   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.068942   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.068975   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.069241   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.069493   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.069731   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.069939   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.070156   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.070325   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.070346   83859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-048296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-048296/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-048296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:54.180424   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:59:54.180460   83859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:54.180479   83859 buildroot.go:174] setting up certificates
	I1209 23:59:54.180490   83859 provision.go:84] configureAuth start
	I1209 23:59:54.180499   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:54.180802   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:54.183349   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.183703   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.183732   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.183852   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.186076   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.186341   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.186372   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.186498   83859 provision.go:143] copyHostCerts
	I1209 23:59:54.186554   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:54.186564   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:54.186627   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:54.186715   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:54.186723   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:54.186744   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:54.186805   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:54.186814   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:54.186831   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:54.186881   83859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.no-preload-048296 san=[127.0.0.1 192.168.61.182 localhost minikube no-preload-048296]
	I1209 23:59:54.291230   83859 provision.go:177] copyRemoteCerts
	I1209 23:59:54.291296   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:54.291319   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.293968   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.294257   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.294283   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.294451   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.294654   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.294814   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.294963   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.377572   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:54.401222   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 23:59:54.425323   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:59:54.448358   83859 provision.go:87] duration metric: took 267.838318ms to configureAuth
	I1209 23:59:54.448387   83859 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:54.448563   83859 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:54.448629   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.451082   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.451422   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.451450   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.451678   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.451906   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.452088   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.452222   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.452374   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.452550   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.452567   83859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:54.665629   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:54.665663   83859 machine.go:96] duration metric: took 817.991465ms to provisionDockerMachine
	I1209 23:59:54.665677   83859 start.go:293] postStartSetup for "no-preload-048296" (driver="kvm2")
	I1209 23:59:54.665690   83859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:54.665712   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.666016   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:54.666043   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.668993   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.669501   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.669532   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.669666   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.669859   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.670004   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.670160   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.752122   83859 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:54.756546   83859 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:54.756569   83859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:54.756640   83859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:54.756706   83859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:54.756797   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:54.766465   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:54.792214   83859 start.go:296] duration metric: took 126.521539ms for postStartSetup
	I1209 23:59:54.792263   83859 fix.go:56] duration metric: took 19.967992145s for fixHost
	I1209 23:59:54.792284   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.794921   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.795304   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.795334   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.795549   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.795807   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.795986   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.796154   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.796333   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.796485   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.796495   83859 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:54.900382   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788794.870799171
	
	I1209 23:59:54.900413   83859 fix.go:216] guest clock: 1733788794.870799171
	I1209 23:59:54.900423   83859 fix.go:229] Guest: 2024-12-09 23:59:54.870799171 +0000 UTC Remote: 2024-12-09 23:59:54.792267927 +0000 UTC m=+357.892420200 (delta=78.531244ms)
	I1209 23:59:54.900443   83859 fix.go:200] guest clock delta is within tolerance: 78.531244ms
	I1209 23:59:54.900448   83859 start.go:83] releasing machines lock for "no-preload-048296", held for 20.076230285s
	I1209 23:59:54.900466   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.900763   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:54.903437   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.903762   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.903785   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.903941   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904412   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904583   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904674   83859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:54.904735   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.904788   83859 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:54.904815   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.907540   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.907693   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.907936   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.907960   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.908083   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.908098   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.908108   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.908269   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.908332   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.908431   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.908578   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.908607   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.908750   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.908753   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.980588   83859 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:55.003079   83859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:55.152159   83859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:55.158212   83859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:55.158284   83859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:55.177510   83859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:55.177539   83859 start.go:495] detecting cgroup driver to use...
	I1209 23:59:55.177616   83859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:55.194262   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:55.210699   83859 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:55.210770   83859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:55.226308   83859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:55.242175   83859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:55.361845   83859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:55.500415   83859 docker.go:233] disabling docker service ...
	I1209 23:59:55.500487   83859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:55.515689   83859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:55.528651   83859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:55.663341   83859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:55.776773   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:55.790155   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:55.807749   83859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:59:55.807807   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.817580   83859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:55.817644   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.827975   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.837871   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.848118   83859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:55.858322   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.867754   83859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.884626   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
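
The sed edits above pin the pause image, switch cri-o to the cgroupfs cgroup manager, set conmon_cgroup to "pod", and open unprivileged ports via default_sysctls, all by regex-rewriting /etc/crio/crio.conf.d/02-crio.conf in place. A minimal sketch of the same idea as in-memory regexp substitutions in Go; the input fragment below is made up for illustration, not the file's real contents:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // The log above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed; this
    // applies equivalent regexp substitutions to a sample in-memory fragment.
    func main() {
    	conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    `
    	// Pin the pause image, matching the first sed command in the log.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	// Switch to cgroupfs and add conmon_cgroup = "pod" right after it,
    	// mirroring the sed "a" (append) command above.
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`+"\nconmon_cgroup = \"pod\"")
    	fmt.Print(conf)
    }
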
	I1209 23:59:55.894254   83859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:55.903128   83859 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:55.903187   83859 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:55.914887   83859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:55.924665   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:56.028206   83859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:56.117484   83859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:56.117573   83859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
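
The start.go:542 step above is a plain poll loop: stat the CRI socket until it exists, capped at 60 seconds (the crictl-version wait that follows works the same way). A minimal sketch of that wait-for-path pattern, assuming the same /var/run/crio/crio.sock path and a local check instead of minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls for a filesystem path (here the CRI socket) until it
    // exists or the timeout elapses, mirroring the "Will wait 60s for socket
    // path" step in the log above.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("crio socket is ready")
    }
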
	I1209 23:59:56.122345   83859 start.go:563] Will wait 60s for crictl version
	I1209 23:59:56.122401   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.126032   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:56.161884   83859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:56.161978   83859 ssh_runner.go:195] Run: crio --version
	I1209 23:59:56.194758   83859 ssh_runner.go:195] Run: crio --version
	I1209 23:59:56.224190   83859 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:59:56.225560   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:56.228550   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:56.228928   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:56.228950   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:56.229163   83859 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:56.233615   83859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
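
The one-liner above keeps /etc/hosts idempotent: filter out any existing host.minikube.internal line, append the fresh mapping, and copy the result back. A local Go sketch of the same rewrite; the helper name upsertHostsEntry is made up, and it writes via a temp file plus rename rather than the log's sudo cp:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHostsEntry rewrites hostsPath so that exactly one line maps name to
    // ip, mirroring the grep -v / append / cp pattern in the log above. It only
    // matches tab-separated entries, like the one minikube writes.
    func upsertHostsEntry(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any existing mapping for this name.
    		if strings.HasSuffix(line, "\t"+name) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := hostsPath + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, hostsPath) // the log uses `sudo cp` instead
    }

    func main() {
    	if err := upsertHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("hosts entry updated")
    }
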
	I1209 23:59:56.245890   83859 kubeadm.go:883] updating cluster {Name:no-preload-048296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:56.246079   83859 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:59:56.246132   83859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:56.285601   83859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:59:56.285629   83859 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:59:56.285694   83859 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:56.285734   83859 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.285761   83859 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.285813   83859 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.285858   83859 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.285900   83859 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.285810   83859 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 23:59:56.285983   83859 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.287414   83859 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 23:59:56.287436   83859 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.287436   83859 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.287446   83859 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.287465   83859 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.287500   83859 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.287550   83859 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:56.287550   83859 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.437713   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.453590   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.457708   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.468937   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1209 23:59:56.474766   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.477321   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.483791   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.530686   83859 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1209 23:59:56.530735   83859 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.530786   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.603275   83859 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1209 23:59:56.603327   83859 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.603376   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.612591   83859 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1209 23:59:56.612635   83859 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.612686   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726597   83859 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1209 23:59:56.726643   83859 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.726650   83859 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1209 23:59:56.726682   83859 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.726692   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726728   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726752   83859 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1209 23:59:56.726777   83859 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.726808   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726824   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.726882   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.726889   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.795578   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.795624   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.795646   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.795581   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.795723   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.795847   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.905584   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.909282   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.927659   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.927832   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.927928   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.932487   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:53.256494   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:55.257888   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:54.991894   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:57.491500   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:55.645853   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.145844   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.644899   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.145431   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.645625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.145096   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.645933   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.145181   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.645624   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:00.145874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.971745   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1209 23:59:56.971844   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:56.987218   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 23:59:56.987380   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 23:59:57.050691   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:57.064778   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:57.087868   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:57.087972   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:57.089176   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 23:59:57.089235   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1209 23:59:57.089262   83859 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:57.089269   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 23:59:57.089311   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:57.089355   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1209 23:59:57.098939   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 23:59:57.099044   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 23:59:57.148482   83859 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1209 23:59:57.148532   83859 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:57.148588   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:57.173723   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 23:59:57.173810   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 23:59:57.186640   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 23:59:57.186734   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:00.723670   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.634331388s)
	I1210 00:00:00.723704   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1210 00:00:00.723717   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (3.634425647s)
	I1210 00:00:00.723751   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1210 00:00:00.723728   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 00:00:00.723767   83859 ssh_runner.go:235] Completed: which crictl: (3.575163822s)
	I1210 00:00:00.723815   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:00.723857   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (3.550033094s)
	I1210 00:00:00.723816   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 00:00:00.723909   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (3.537159613s)
	I1210 00:00:00.723934   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1210 00:00:00.723854   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.624679414s)
	I1210 00:00:00.723974   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1210 00:00:00.723880   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1209 23:59:57.756271   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:00.256451   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:02.256956   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:59.491859   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:01.992860   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:00.644886   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.145063   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.645932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.145743   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.645396   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.145007   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.645927   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.145876   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:05.145906   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.713826   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.989917623s)
	I1210 00:00:02.713866   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1210 00:00:02.713883   83859 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.99004494s)
	I1210 00:00:02.713896   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 00:00:02.713949   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:02.713952   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 00:00:02.753832   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:04.780306   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.066273178s)
	I1210 00:00:04.780344   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1210 00:00:04.780368   83859 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:04.780432   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:04.780430   83859 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.026560705s)
	I1210 00:00:04.780544   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 00:00:04.780618   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:06.756731   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.976269529s)
	I1210 00:00:06.756763   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1210 00:00:06.756774   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.976141757s)
	I1210 00:00:06.756793   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 00:00:06.756791   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 00:00:06.756846   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 00:00:04.258390   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:06.756422   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:04.490429   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:06.991054   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:08.991247   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:05.644919   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.145142   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.645240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.144864   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.145246   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.645965   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.645731   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:10.145638   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.712114   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.955245193s)
	I1210 00:00:08.712142   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1210 00:00:08.712166   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 00:00:08.712212   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 00:00:10.892294   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.180058873s)
	I1210 00:00:10.892319   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1210 00:00:10.892352   83859 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:10.892391   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:11.542174   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 00:00:11.542214   83859 cache_images.go:123] Successfully loaded all cached images
	I1210 00:00:11.542219   83859 cache_images.go:92] duration metric: took 15.256564286s to LoadCachedImages
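
From 23:59:56 onward the log shows the LoadCachedImages flow: podman image inspect to see whether each image is already present at the expected digest, crictl rmi for any stale copy, then stat the cached tarball under /var/lib/minikube/images and podman load it. A rough local sketch of that check-then-load step (the crictl rmi of stale tags is omitted, and the image/tarball arguments below are illustrative placeholders):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureImage loads an image tarball only when the image is not already
    // present, roughly following the inspect -> stat -> load sequence above.
    func ensureImage(image, tarball string) error {
    	// "podman image inspect" exits non-zero when the image is missing.
    	if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
    		return nil // already present, nothing to do
    	}
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("cached tarball %s not found: %w", tarball, err)
    	}
    	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load failed: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	// Placeholder arguments; the real paths come from minikube's image cache.
    	if err := ensureImage("registry.k8s.io/pause:3.10", "/var/lib/minikube/images/pause_3.10"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
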
	I1210 00:00:11.542230   83859 kubeadm.go:934] updating node { 192.168.61.182 8443 v1.31.2 crio true true} ...
	I1210 00:00:11.542334   83859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-048296 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:00:11.542397   83859 ssh_runner.go:195] Run: crio config
	I1210 00:00:11.586768   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:00:11.586787   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:00:11.586795   83859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:00:11.586817   83859 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.182 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-048296 NodeName:no-preload-048296 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:00:11.586932   83859 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-048296"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.182"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.182"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
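
The kubeadm.go:195 block above is the fully rendered kubeadm config that is later copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp step below). A heavily trimmed sketch of generating such a file from a Go text/template; the struct fields and template text here are stand-ins, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeadmParams holds the few values substituted in this simplified sketch.
    type kubeadmParams struct {
    	AdvertiseAddress  string
    	BindPort          int
    	NodeName          string
    	KubernetesVersion string
    	PodSubnet         string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	p := kubeadmParams{
    		AdvertiseAddress:  "192.168.61.182",
    		BindPort:          8443,
    		NodeName:          "no-preload-048296",
    		KubernetesVersion: "v1.31.2",
    		PodSubnet:         "10.244.0.0/16",
    	}
    	// Render to stdout; minikube instead scp's the result onto the node.
    	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
    }
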
	
	I1210 00:00:11.586992   83859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:00:11.597936   83859 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:00:11.597996   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:00:11.608068   83859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 00:00:11.624881   83859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:00:11.640934   83859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1210 00:00:11.658113   83859 ssh_runner.go:195] Run: grep 192.168.61.182	control-plane.minikube.internal$ /etc/hosts
	I1210 00:00:11.662360   83859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:00:11.674732   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:00:11.803743   83859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:00:11.821169   83859 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296 for IP: 192.168.61.182
	I1210 00:00:11.821191   83859 certs.go:194] generating shared ca certs ...
	I1210 00:00:11.821211   83859 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:00:11.821404   83859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1210 00:00:11.821485   83859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1210 00:00:11.821502   83859 certs.go:256] generating profile certs ...
	I1210 00:00:11.821583   83859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/client.key
	I1210 00:00:11.821644   83859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.key.9304569d
	I1210 00:00:11.821677   83859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.key
	I1210 00:00:11.821783   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1210 00:00:11.821811   83859 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1210 00:00:11.821821   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:00:11.821848   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1210 00:00:11.821881   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:00:11.821917   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1210 00:00:11.821965   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1210 00:00:11.822649   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:00:11.867065   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 00:00:11.898744   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:00:11.932711   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:00:08.758664   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:11.257156   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:11.491011   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:13.491036   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:10.645462   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.145646   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.645921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.145804   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.644924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.145055   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.645811   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.144877   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.645709   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.145827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.966670   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:00:11.997827   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:00:12.023344   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:00:12.048872   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:00:12.074332   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:00:12.097886   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1210 00:00:12.121883   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1210 00:00:12.145451   83859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:00:12.167089   83859 ssh_runner.go:195] Run: openssl version
	I1210 00:00:12.172747   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1210 00:00:12.183718   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.188458   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.188521   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.194537   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:00:12.205725   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:00:12.216441   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.221179   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.221237   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.226942   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:00:12.237830   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1210 00:00:12.248188   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.253689   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.253747   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.259326   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1210 00:00:12.269796   83859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:00:12.274316   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:00:12.280263   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:00:12.286235   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:00:12.292330   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:00:12.298347   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:00:12.304186   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
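
Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The equivalent check in Go with crypto/x509, assuming a PEM-encoded certificate at the same path used in the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within
    // d, the equivalent of `openssl x509 -checkend <seconds>` used above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
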
	I1210 00:00:12.310189   83859 kubeadm.go:392] StartCluster: {Name:no-preload-048296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:00:12.310296   83859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:00:12.310349   83859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:00:12.346589   83859 cri.go:89] found id: ""
	I1210 00:00:12.346666   83859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:00:12.357674   83859 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 00:00:12.357701   83859 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 00:00:12.357753   83859 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 00:00:12.367817   83859 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:00:12.368827   83859 kubeconfig.go:125] found "no-preload-048296" server: "https://192.168.61.182:8443"
	I1210 00:00:12.371117   83859 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 00:00:12.380367   83859 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.182
	I1210 00:00:12.380393   83859 kubeadm.go:1160] stopping kube-system containers ...
	I1210 00:00:12.380404   83859 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 00:00:12.380446   83859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:00:12.413083   83859 cri.go:89] found id: ""
	I1210 00:00:12.413143   83859 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 00:00:12.429819   83859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:00:12.439244   83859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:00:12.439269   83859 kubeadm.go:157] found existing configuration files:
	
	I1210 00:00:12.439336   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:00:12.448182   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:00:12.448257   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:00:12.457759   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:00:12.467372   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:00:12.467449   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:00:12.476877   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:00:12.485831   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:00:12.485898   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:00:12.495537   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:00:12.504333   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:00:12.504398   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:00:12.514572   83859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:00:12.524134   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:12.619683   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.361844   83859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.742129567s)
	I1210 00:00:14.361876   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.585241   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.653166   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.771204   83859 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:00:14.771306   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.272143   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.771676   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.789350   83859 api_server.go:72] duration metric: took 1.018150373s to wait for apiserver process to appear ...
	I1210 00:00:15.789378   83859 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:00:15.789396   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:15.789878   83859 api_server.go:269] stopped: https://192.168.61.182:8443/healthz: Get "https://192.168.61.182:8443/healthz": dial tcp 192.168.61.182:8443: connect: connection refused
	I1210 00:00:16.289518   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:13.757326   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:15.757843   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:15.491876   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:17.991160   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:18.572189   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:00:18.572233   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:00:18.572262   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:18.607691   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:00:18.607732   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:00:18.790007   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:18.794252   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:00:18.794281   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:00:19.289819   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:19.294079   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:00:19.294104   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:00:19.789647   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:19.794119   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 200:
	ok
	I1210 00:00:19.800447   83859 api_server.go:141] control plane version: v1.31.2
	I1210 00:00:19.800477   83859 api_server.go:131] duration metric: took 4.011091942s to wait for apiserver health ...
	I1210 00:00:19.800488   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:00:19.800496   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:00:19.802341   83859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
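	The api_server.go lines above show the restart waiter for pid 83859 polling https://192.168.61.182:8443/healthz roughly every 500ms, treating 403 (RBAC bootstrap not finished, "system:anonymous" rejected) and 500 (failed poststarthooks) as "not ready yet" until a plain 200/ok arrives about 4s later. Below is a minimal sketch of that polling pattern, assuming an insecure client is acceptable for a health probe; minikube's real implementation in api_server.go differs in detail.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the timeout expires. 403 and 500 responses are treated as "not ready yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Sketch only: skip cert verification; the real waiter trusts the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported "ok"
				}
				// 403 before RBAC bootstrap completes, 500 while poststarthooks still fail.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.182:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}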
	I1210 00:00:15.645243   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.145027   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.645018   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.145001   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.644893   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.145189   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.645579   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.144946   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.645832   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:20.145634   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.803715   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:00:19.814324   83859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 00:00:19.861326   83859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:00:19.873554   83859 system_pods.go:59] 8 kube-system pods found
	I1210 00:00:19.873588   83859 system_pods.go:61] "coredns-7c65d6cfc9-smnt7" [7d85cb49-3cbb-4133-acab-284ddf0b72f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:00:19.873597   83859 system_pods.go:61] "etcd-no-preload-048296" [3eeaede5-b759-49e5-8267-0aaf3c374178] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 00:00:19.873606   83859 system_pods.go:61] "kube-apiserver-no-preload-048296" [8e9d39ce-218a-4fe4-86ea-234d97a5e51d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 00:00:19.873615   83859 system_pods.go:61] "kube-controller-manager-no-preload-048296" [e28ccf7d-13e4-4b55-81d0-2dd348584bca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 00:00:19.873622   83859 system_pods.go:61] "kube-proxy-z479r" [eb6588a6-ac3c-4066-b69e-5613b03ade53] Running
	I1210 00:00:19.873634   83859 system_pods.go:61] "kube-scheduler-no-preload-048296" [0a184373-35d4-4ac6-92fb-65a4faf1c2e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 00:00:19.873646   83859 system_pods.go:61] "metrics-server-6867b74b74-sd58c" [19d64157-7b9b-4a39-8e0c-2c48729fbc93] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:00:19.873653   83859 system_pods.go:61] "storage-provisioner" [30b3ed5d-f843-4090-827f-e1ee6a7ea274] Running
	I1210 00:00:19.873661   83859 system_pods.go:74] duration metric: took 12.308897ms to wait for pod list to return data ...
	I1210 00:00:19.873668   83859 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:00:19.877459   83859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:00:19.877482   83859 node_conditions.go:123] node cpu capacity is 2
	I1210 00:00:19.877496   83859 node_conditions.go:105] duration metric: took 3.822698ms to run NodePressure ...
	I1210 00:00:19.877513   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:20.145838   83859 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 00:00:20.151038   83859 kubeadm.go:739] kubelet initialised
	I1210 00:00:20.151059   83859 kubeadm.go:740] duration metric: took 5.1997ms waiting for restarted kubelet to initialise ...
	I1210 00:00:20.151068   83859 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:00:20.157554   83859 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.162413   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.162437   83859 pod_ready.go:82] duration metric: took 4.858113ms for pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.162446   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.162452   83859 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.167285   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "etcd-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.167312   83859 pod_ready.go:82] duration metric: took 4.84903ms for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.167321   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "etcd-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.167328   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.172365   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "kube-apiserver-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.172386   83859 pod_ready.go:82] duration metric: took 5.052446ms for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.172395   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "kube-apiserver-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.172401   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
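	The pod_ready.go entries implement a per-pod wait: each system-critical pod is polled for its Ready condition, and the wait is short-circuited ("skipping!") as long as the hosting node itself is not Ready. A rough stand-in for the per-pod check, done with kubectl and a jsonpath filter rather than minikube's client; the context name below is hypothetical and borrowed from the profile name no-preload-048296 seen in the log.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady returns true once the pod's Ready condition reports "True".
	func podReady(kubeContext, namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		// ~4m budget at a 500ms cadence, matching the "waiting up to 4m0s" lines above.
		for i := 0; i < 480; i++ {
			ok, err := podReady("no-preload-048296", "kube-system", "coredns-7c65d6cfc9-smnt7")
			if err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for Ready")
	}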
	I1210 00:00:18.257283   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.756752   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.490469   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:22.990635   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.645094   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.145179   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.645829   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.145159   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.645390   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.645205   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.145625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.645299   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:25.144971   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.180104   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:24.680524   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:26.680818   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:22.757621   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:25.257680   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:24.991719   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:27.490099   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:25.644977   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.145967   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.144972   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.645030   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.145227   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.645104   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.144957   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.645516   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:30.145343   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
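	The long run of ssh_runner.go lines from pid 84547 is a different waiter: every ~500ms it runs sudo pgrep -xnf kube-apiserver.*minikube.* on the node, looking for a live apiserver process before attempting any HTTP health check. A stripped-down local version of that loop is sketched below; the real code executes the command over minikube's SSH runner rather than on the host.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until a kube-apiserver process whose
	// command line mentions "minikube" shows up, or the timeout expires.
	func waitForAPIServerProcess(timeout time.Duration) (int, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// -x: whole command line must match, -n: newest match, -f: match full command line.
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				var pid int
				fmt.Sscanf(strings.TrimSpace(string(out)), "%d", &pid)
				return pid, nil
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return 0, fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if pid, err := waitForAPIServerProcess(2 * time.Minute); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("kube-apiserver pid:", pid)
		}
	}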
	I1210 00:00:28.678941   83859 pod_ready.go:93] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:28.678972   83859 pod_ready.go:82] duration metric: took 8.506560777s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.678985   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z479r" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.684896   83859 pod_ready.go:93] pod "kube-proxy-z479r" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:28.684917   83859 pod_ready.go:82] duration metric: took 5.924915ms for pod "kube-proxy-z479r" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.684926   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:30.691918   83859 pod_ready.go:103] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:31.691333   83859 pod_ready.go:93] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:31.691362   83859 pod_ready.go:82] duration metric: took 3.006429416s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:31.691372   83859 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:27.756937   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:30.260931   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:29.990322   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:32.489835   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:30.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.145393   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.645952   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.145695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.644910   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.146012   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.645636   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.145664   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.645813   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:35.145365   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.697948   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.197360   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:32.756044   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:34.756585   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.756683   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:34.490676   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.491117   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:38.991231   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:35.645823   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.145410   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.144994   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.645701   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.144880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.645735   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.145806   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.645695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:40.145692   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.198422   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:40.697971   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:39.256015   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:41.256607   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:41.490276   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:43.989182   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:40.644935   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.145320   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.645470   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.145714   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.644961   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.145687   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.644990   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:44.144958   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:44.645780   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:44.645870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:44.680200   84547 cri.go:89] found id: ""
	I1210 00:00:44.680321   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.680334   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:44.680343   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:44.680413   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:44.715278   84547 cri.go:89] found id: ""
	I1210 00:00:44.715305   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.715312   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:44.715318   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:44.715377   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:44.749908   84547 cri.go:89] found id: ""
	I1210 00:00:44.749933   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.749941   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:44.749946   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:44.750008   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:44.786851   84547 cri.go:89] found id: ""
	I1210 00:00:44.786882   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.786893   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:44.786901   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:44.786966   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:44.830060   84547 cri.go:89] found id: ""
	I1210 00:00:44.830106   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.830117   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:44.830125   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:44.830191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:44.865525   84547 cri.go:89] found id: ""
	I1210 00:00:44.865560   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.865571   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:44.865579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:44.865643   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:44.902529   84547 cri.go:89] found id: ""
	I1210 00:00:44.902565   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.902575   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:44.902584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:44.902647   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:44.938876   84547 cri.go:89] found id: ""
	I1210 00:00:44.938904   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.938914   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:44.938925   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:44.938939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:44.992533   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:44.992570   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:45.005990   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:45.006020   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:45.116774   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:45.116797   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:45.116810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:45.187376   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:45.187411   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
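	Because no kube-apiserver container ever appears, the waiter falls back to gathering diagnostics: crictl ps -a per component, journalctl for kubelet and CRI-O, dmesg, kubectl describe nodes (which fails while localhost:8443 refuses connections), and a container status listing. A compact sketch of that collection step, shelling out to the same commands the log shows; binary paths are copied from the log and may differ on other setups.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherLogs runs the diagnostic commands seen in the log and returns each
	// command's combined output keyed by a short label. Errors are kept next to
	// the output so a refused connection (e.g. describe nodes) is still visible.
	func gatherLogs() map[string]string {
		cmds := map[string][]string{
			"kubelet":          {"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
			"crio":             {"sudo", "journalctl", "-u", "crio", "-n", "400"},
			"dmesg":            {"/bin/bash", "-c", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			"container status": {"/bin/bash", "-c", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
			"describe nodes":   {"sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl", "describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig"},
		}
		results := make(map[string]string)
		for label, argv := range cmds {
			out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
			if err != nil {
				results[label] = fmt.Sprintf("error: %v\n%s", err, out)
				continue
			}
			results[label] = string(out)
		}
		return results
	}

	func main() {
		for label, out := range gatherLogs() {
			fmt.Printf("==> %s <==\n%s\n", label, out)
		}
	}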
	I1210 00:00:42.698555   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.198082   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:43.756559   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.756755   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.990399   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.991485   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.728964   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:47.742500   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:47.742560   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:47.775751   84547 cri.go:89] found id: ""
	I1210 00:00:47.775783   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.775793   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:47.775799   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:47.775848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:47.810202   84547 cri.go:89] found id: ""
	I1210 00:00:47.810228   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.810235   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:47.810241   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:47.810302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:47.844686   84547 cri.go:89] found id: ""
	I1210 00:00:47.844730   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.844739   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:47.844745   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:47.844802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:47.884076   84547 cri.go:89] found id: ""
	I1210 00:00:47.884108   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.884119   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:47.884127   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:47.884188   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:47.923697   84547 cri.go:89] found id: ""
	I1210 00:00:47.923722   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.923729   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:47.923734   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:47.923791   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:47.955790   84547 cri.go:89] found id: ""
	I1210 00:00:47.955816   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.955824   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:47.955829   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:47.955890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:48.021501   84547 cri.go:89] found id: ""
	I1210 00:00:48.021529   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.021537   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:48.021543   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:48.021592   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:48.054645   84547 cri.go:89] found id: ""
	I1210 00:00:48.054675   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.054688   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:48.054699   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:48.054714   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:48.135706   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:48.135729   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:48.135746   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:48.212397   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:48.212438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:48.254002   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:48.254033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:48.305858   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:48.305891   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:47.697503   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:49.698076   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.758210   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.257400   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.490507   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:52.989527   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.819644   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:50.834153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:50.834233   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:50.872357   84547 cri.go:89] found id: ""
	I1210 00:00:50.872391   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.872402   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:50.872409   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:50.872471   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:50.909793   84547 cri.go:89] found id: ""
	I1210 00:00:50.909826   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.909839   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:50.909848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:50.909915   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:50.947167   84547 cri.go:89] found id: ""
	I1210 00:00:50.947206   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.947219   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:50.947228   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:50.947304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:50.987194   84547 cri.go:89] found id: ""
	I1210 00:00:50.987219   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.987228   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:50.987234   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:50.987287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:51.027433   84547 cri.go:89] found id: ""
	I1210 00:00:51.027464   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.027476   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:51.027483   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:51.027586   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:51.062156   84547 cri.go:89] found id: ""
	I1210 00:00:51.062186   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.062200   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:51.062208   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:51.062267   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:51.099161   84547 cri.go:89] found id: ""
	I1210 00:00:51.099187   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.099195   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:51.099202   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:51.099249   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:51.134400   84547 cri.go:89] found id: ""
	I1210 00:00:51.134432   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.134445   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:51.134459   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:51.134474   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:51.171842   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:51.171869   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:51.222815   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:51.222854   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:51.236499   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:51.236537   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:51.307835   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:51.307856   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:51.307871   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:53.888771   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:53.902167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:53.902234   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:53.940724   84547 cri.go:89] found id: ""
	I1210 00:00:53.940748   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.940756   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:53.940767   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:53.940823   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:53.975076   84547 cri.go:89] found id: ""
	I1210 00:00:53.975111   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.975122   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:53.975128   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:53.975191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:54.011123   84547 cri.go:89] found id: ""
	I1210 00:00:54.011149   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.011157   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:54.011162   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:54.011207   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:54.043594   84547 cri.go:89] found id: ""
	I1210 00:00:54.043620   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.043628   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:54.043633   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:54.043679   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:54.076172   84547 cri.go:89] found id: ""
	I1210 00:00:54.076208   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.076219   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:54.076227   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:54.076292   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:54.110646   84547 cri.go:89] found id: ""
	I1210 00:00:54.110679   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.110691   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:54.110711   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:54.110784   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:54.145901   84547 cri.go:89] found id: ""
	I1210 00:00:54.145929   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.145940   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:54.145947   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:54.146007   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:54.178589   84547 cri.go:89] found id: ""
	I1210 00:00:54.178618   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.178629   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:54.178639   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:54.178652   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:54.231005   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:54.231040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:54.244583   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:54.244608   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:54.318759   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:54.318787   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:54.318800   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:54.395975   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:54.396012   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:52.198012   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:54.199221   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:56.697929   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:52.756419   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:54.757807   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:57.257555   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:55.490621   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:57.491632   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:56.969699   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:56.985159   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:56.985232   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:57.020768   84547 cri.go:89] found id: ""
	I1210 00:00:57.020798   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.020807   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:57.020812   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:57.020861   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:57.055796   84547 cri.go:89] found id: ""
	I1210 00:00:57.055826   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.055834   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:57.055839   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:57.055897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:57.089345   84547 cri.go:89] found id: ""
	I1210 00:00:57.089375   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.089385   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:57.089392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:57.089460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:57.123969   84547 cri.go:89] found id: ""
	I1210 00:00:57.123993   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.124004   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:57.124012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:57.124075   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:57.158744   84547 cri.go:89] found id: ""
	I1210 00:00:57.158769   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.158777   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:57.158783   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:57.158842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:57.203933   84547 cri.go:89] found id: ""
	I1210 00:00:57.203954   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.203962   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:57.203968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:57.204025   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:57.238180   84547 cri.go:89] found id: ""
	I1210 00:00:57.238213   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.238224   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:57.238231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:57.238287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:57.273583   84547 cri.go:89] found id: ""
	I1210 00:00:57.273612   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.273623   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:57.273633   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:57.273648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:57.345992   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:57.346019   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:57.346035   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:57.428335   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:57.428369   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:57.466437   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:57.466472   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:57.517064   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:57.517099   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.030599   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:00.045151   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:00.045229   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:00.083050   84547 cri.go:89] found id: ""
	I1210 00:01:00.083074   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.083085   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:00.083093   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:00.083152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:00.119082   84547 cri.go:89] found id: ""
	I1210 00:01:00.119107   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.119120   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:00.119126   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:00.119185   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:00.166888   84547 cri.go:89] found id: ""
	I1210 00:01:00.166921   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.166931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:00.166939   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:00.166998   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:00.209504   84547 cri.go:89] found id: ""
	I1210 00:01:00.209526   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.209533   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:00.209539   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:00.209595   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:00.247649   84547 cri.go:89] found id: ""
	I1210 00:01:00.247672   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.247680   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:00.247686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:00.247736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:00.294419   84547 cri.go:89] found id: ""
	I1210 00:01:00.294445   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.294455   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:00.294463   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:00.294526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:00.328634   84547 cri.go:89] found id: ""
	I1210 00:01:00.328667   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.328677   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:00.328684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:00.328751   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:00.364667   84547 cri.go:89] found id: ""
	I1210 00:01:00.364696   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.364724   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:00.364733   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:00.364745   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.377518   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:00.377552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:00.449145   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:00.449166   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:00.449178   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:58.698000   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.196968   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:59.756367   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.756407   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:59.989896   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.991121   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
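
The pod_ready lines interleaved above come from three other clusters under test (processes 83859, 84259, 83900), each polling a metrics-server pod that never reports Ready. A minimal manual equivalent of that readiness check, assuming kubectl access to the affected cluster (the --context name is not visible in these lines, and the jsonpath query is an illustrative choice, not the tool's own code):

    # Hypothetical manual check mirroring the pod_ready polling above;
    # the pod name is taken from the log, the cluster context is assumed.
    kubectl -n kube-system get pod metrics-server-6867b74b74-sd58c \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Prints "False" while the pod is unready, matching the log's
    # `has status "Ready":"False"` messages.
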
	I1210 00:01:00.529462   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:00.529499   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:00.570471   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:00.570503   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.122835   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:03.136329   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:03.136405   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:03.170352   84547 cri.go:89] found id: ""
	I1210 00:01:03.170380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.170388   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:03.170393   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:03.170454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:03.204339   84547 cri.go:89] found id: ""
	I1210 00:01:03.204368   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.204379   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:03.204386   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:03.204448   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:03.237516   84547 cri.go:89] found id: ""
	I1210 00:01:03.237559   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.237572   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:03.237579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:03.237641   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:03.273379   84547 cri.go:89] found id: ""
	I1210 00:01:03.273407   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.273416   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:03.273421   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:03.274017   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:03.308987   84547 cri.go:89] found id: ""
	I1210 00:01:03.309026   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.309038   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:03.309046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:03.309102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:03.341913   84547 cri.go:89] found id: ""
	I1210 00:01:03.341943   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.341954   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:03.341961   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:03.342009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:03.375403   84547 cri.go:89] found id: ""
	I1210 00:01:03.375429   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.375437   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:03.375442   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:03.375494   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:03.408352   84547 cri.go:89] found id: ""
	I1210 00:01:03.408380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.408387   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:03.408397   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:03.408409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:03.483789   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:03.483831   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:03.532021   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:03.532050   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.585095   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:03.585129   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:03.600062   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:03.600089   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:03.671633   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
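
Each cycle above probes every expected control-plane component with one crictl call and, finding nothing, falls back to collecting kubelet, dmesg, CRI-O, and container-status logs plus the failing describe-nodes probe. A condensed sketch of the same probe sequence run manually on the node (individual commands copied from the log; the loop wrapper is illustrative only):

    # Probe each component the way the log does; empty output corresponds
    # to the 'No container was found matching "<name>"' warnings above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done
    # Fallback log collection, as in the "Gathering logs for ..." steps:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a
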
	I1210 00:01:03.197410   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:05.698110   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:03.757014   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.257259   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:04.490159   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.490493   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:08.991447   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.172345   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:06.199809   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:06.199897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:06.240690   84547 cri.go:89] found id: ""
	I1210 00:01:06.240749   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.240760   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:06.240769   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:06.240822   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:06.275743   84547 cri.go:89] found id: ""
	I1210 00:01:06.275770   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.275779   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:06.275786   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:06.275848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:06.310399   84547 cri.go:89] found id: ""
	I1210 00:01:06.310427   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.310438   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:06.310445   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:06.310507   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:06.345278   84547 cri.go:89] found id: ""
	I1210 00:01:06.345308   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.345320   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:06.345328   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:06.345394   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:06.381926   84547 cri.go:89] found id: ""
	I1210 00:01:06.381961   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.381988   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:06.381994   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:06.382064   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:06.415341   84547 cri.go:89] found id: ""
	I1210 00:01:06.415367   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.415377   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:06.415385   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:06.415438   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:06.449701   84547 cri.go:89] found id: ""
	I1210 00:01:06.449733   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.449743   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:06.449750   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:06.449814   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:06.484271   84547 cri.go:89] found id: ""
	I1210 00:01:06.484298   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.484307   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:06.484317   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:06.484333   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:06.570279   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:06.570313   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:06.607812   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:06.607845   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:06.658206   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:06.658243   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:06.670949   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:06.670978   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:06.745785   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
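
The kube-apiserver process check at the top of each cycle repeats roughly every three seconds, and the describe-nodes step keeps failing because nothing is listening on localhost:8443 yet. A manual version of that wait-and-retry, using only commands already shown in the log (the 3-second sleep mirrors the observed cadence and is otherwise an assumption):

    # Poll for a running apiserver process, as the log does every ~3s.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 3
    done
    # Once the apiserver is up, the previously refused probe should succeed:
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
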
	I1210 00:01:09.246367   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:09.259390   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:09.259462   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:09.291811   84547 cri.go:89] found id: ""
	I1210 00:01:09.291841   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.291853   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:09.291865   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:09.291922   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:09.326996   84547 cri.go:89] found id: ""
	I1210 00:01:09.327027   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.327038   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:09.327045   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:09.327109   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:09.361026   84547 cri.go:89] found id: ""
	I1210 00:01:09.361062   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.361073   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:09.361081   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:09.361152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:09.397116   84547 cri.go:89] found id: ""
	I1210 00:01:09.397148   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.397159   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:09.397166   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:09.397225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:09.428989   84547 cri.go:89] found id: ""
	I1210 00:01:09.429025   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.429039   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:09.429046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:09.429111   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:09.462491   84547 cri.go:89] found id: ""
	I1210 00:01:09.462535   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.462547   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:09.462572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:09.462652   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:09.502705   84547 cri.go:89] found id: ""
	I1210 00:01:09.502728   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.502735   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:09.502740   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:09.502798   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:09.538721   84547 cri.go:89] found id: ""
	I1210 00:01:09.538744   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.538754   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:09.538766   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:09.538779   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:09.590732   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:09.590768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:09.604928   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:09.604960   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:09.681457   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:09.681484   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:09.681498   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:09.758116   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:09.758149   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:07.698904   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.198215   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:08.755738   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.756172   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.992252   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:13.491283   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:12.307016   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:12.320284   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:12.320371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:12.351984   84547 cri.go:89] found id: ""
	I1210 00:01:12.352007   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.352016   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:12.352024   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:12.352086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:12.387797   84547 cri.go:89] found id: ""
	I1210 00:01:12.387829   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.387840   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:12.387848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:12.387924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:12.417934   84547 cri.go:89] found id: ""
	I1210 00:01:12.417968   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.417979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:12.417987   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:12.418048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:12.451771   84547 cri.go:89] found id: ""
	I1210 00:01:12.451802   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.451815   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:12.451822   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:12.451890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:12.485007   84547 cri.go:89] found id: ""
	I1210 00:01:12.485037   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.485048   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:12.485055   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:12.485117   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:12.527851   84547 cri.go:89] found id: ""
	I1210 00:01:12.527879   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.527895   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:12.527905   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:12.527967   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:12.560341   84547 cri.go:89] found id: ""
	I1210 00:01:12.560364   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.560372   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:12.560377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:12.560428   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:12.593118   84547 cri.go:89] found id: ""
	I1210 00:01:12.593143   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.593150   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:12.593158   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:12.593169   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:12.658694   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:12.658717   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:12.658749   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:12.739742   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:12.739776   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:12.780949   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:12.780977   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:12.829570   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:12.829607   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:15.344239   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:15.358424   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:15.358527   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:15.390712   84547 cri.go:89] found id: ""
	I1210 00:01:15.390744   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.390757   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:15.390764   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:15.390847   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:15.424198   84547 cri.go:89] found id: ""
	I1210 00:01:15.424226   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.424234   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:15.424239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:15.424306   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:15.458340   84547 cri.go:89] found id: ""
	I1210 00:01:15.458363   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.458370   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:15.458377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:15.458422   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:15.491468   84547 cri.go:89] found id: ""
	I1210 00:01:15.491497   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.491507   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:15.491520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:15.491597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:12.698224   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.197841   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:12.756618   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:14.759492   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:17.256075   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.491494   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:17.989410   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.528405   84547 cri.go:89] found id: ""
	I1210 00:01:15.528437   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.528448   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:15.528455   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:15.528517   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:15.562966   84547 cri.go:89] found id: ""
	I1210 00:01:15.562995   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.563005   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:15.563012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:15.563063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:15.595815   84547 cri.go:89] found id: ""
	I1210 00:01:15.595838   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.595845   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:15.595850   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:15.595907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:15.631296   84547 cri.go:89] found id: ""
	I1210 00:01:15.631322   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.631333   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:15.631347   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:15.631362   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:15.680177   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:15.680213   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:15.693685   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:15.693724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:15.760285   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:15.760312   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:15.760326   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:15.837814   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:15.837855   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:18.377112   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:18.390167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:18.390230   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:18.424095   84547 cri.go:89] found id: ""
	I1210 00:01:18.424127   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.424140   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:18.424150   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:18.424216   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:18.456840   84547 cri.go:89] found id: ""
	I1210 00:01:18.456868   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.456876   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:18.456882   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:18.456938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:18.492109   84547 cri.go:89] found id: ""
	I1210 00:01:18.492134   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.492145   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:18.492153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:18.492212   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:18.530445   84547 cri.go:89] found id: ""
	I1210 00:01:18.530472   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.530480   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:18.530486   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:18.530549   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:18.567036   84547 cri.go:89] found id: ""
	I1210 00:01:18.567060   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.567070   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:18.567077   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:18.567136   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:18.606829   84547 cri.go:89] found id: ""
	I1210 00:01:18.606853   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.606863   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:18.606870   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:18.606927   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:18.646032   84547 cri.go:89] found id: ""
	I1210 00:01:18.646061   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.646070   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:18.646075   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:18.646127   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:18.683969   84547 cri.go:89] found id: ""
	I1210 00:01:18.683997   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.684008   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:18.684019   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:18.684036   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:18.735760   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:18.735807   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:18.751689   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:18.751724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:18.823252   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:18.823272   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:18.823287   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:18.908071   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:18.908110   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:17.698577   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:20.197901   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:19.757398   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:22.255920   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:20.490512   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:22.990168   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:21.445415   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:21.458091   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:21.458152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:21.492615   84547 cri.go:89] found id: ""
	I1210 00:01:21.492646   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.492657   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:21.492664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:21.492718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:21.529564   84547 cri.go:89] found id: ""
	I1210 00:01:21.529586   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.529594   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:21.529599   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:21.529669   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:21.566409   84547 cri.go:89] found id: ""
	I1210 00:01:21.566441   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.566450   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:21.566456   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:21.566528   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:21.601783   84547 cri.go:89] found id: ""
	I1210 00:01:21.601815   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.601827   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:21.601835   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:21.601895   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:21.635740   84547 cri.go:89] found id: ""
	I1210 00:01:21.635762   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.635769   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:21.635775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:21.635831   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:21.667777   84547 cri.go:89] found id: ""
	I1210 00:01:21.667806   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.667826   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:21.667834   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:21.667894   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:21.704355   84547 cri.go:89] found id: ""
	I1210 00:01:21.704380   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.704388   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:21.704398   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:21.704457   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:21.737365   84547 cri.go:89] found id: ""
	I1210 00:01:21.737398   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.737410   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:21.737422   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:21.737438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:21.751394   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:21.751434   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:21.823217   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:21.823240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:21.823255   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:21.910097   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:21.910131   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:21.950087   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:21.950123   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.504626   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:24.517920   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:24.517997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:24.551766   84547 cri.go:89] found id: ""
	I1210 00:01:24.551806   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.551814   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:24.551821   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:24.551880   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:24.588210   84547 cri.go:89] found id: ""
	I1210 00:01:24.588246   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.588256   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:24.588263   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:24.588341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:24.620569   84547 cri.go:89] found id: ""
	I1210 00:01:24.620598   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.620607   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:24.620613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:24.620673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:24.657527   84547 cri.go:89] found id: ""
	I1210 00:01:24.657550   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.657558   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:24.657564   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:24.657636   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:24.689382   84547 cri.go:89] found id: ""
	I1210 00:01:24.689410   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.689418   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:24.689423   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:24.689475   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:24.727170   84547 cri.go:89] found id: ""
	I1210 00:01:24.727207   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.727224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:24.727230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:24.727280   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:24.760733   84547 cri.go:89] found id: ""
	I1210 00:01:24.760759   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.760769   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:24.760775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:24.760842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:24.794750   84547 cri.go:89] found id: ""
	I1210 00:01:24.794782   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.794791   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:24.794799   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:24.794810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.847403   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:24.847441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:24.862240   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:24.862273   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:24.936373   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:24.936396   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:24.936409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:25.011126   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:25.011160   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:22.198527   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.697981   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.257466   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:26.258086   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.990792   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:26.991843   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:27.551134   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:27.564526   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:27.564665   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:27.600652   84547 cri.go:89] found id: ""
	I1210 00:01:27.600677   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.600685   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:27.600690   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:27.600753   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:27.643593   84547 cri.go:89] found id: ""
	I1210 00:01:27.643625   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.643636   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:27.643649   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:27.643705   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:27.680633   84547 cri.go:89] found id: ""
	I1210 00:01:27.680664   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.680676   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:27.680684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:27.680748   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:27.721790   84547 cri.go:89] found id: ""
	I1210 00:01:27.721824   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.721832   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:27.721837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:27.721901   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:27.756462   84547 cri.go:89] found id: ""
	I1210 00:01:27.756492   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.756504   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:27.756512   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:27.756574   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:27.792692   84547 cri.go:89] found id: ""
	I1210 00:01:27.792728   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.792838   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:27.792851   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:27.792912   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:27.830065   84547 cri.go:89] found id: ""
	I1210 00:01:27.830098   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.830107   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:27.830116   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:27.830182   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:27.867516   84547 cri.go:89] found id: ""
	I1210 00:01:27.867557   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.867580   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:27.867595   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:27.867611   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:27.917622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:27.917660   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:27.931687   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:27.931717   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:28.003827   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:28.003860   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:28.003876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:28.081375   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:28.081410   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:27.197999   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:29.198364   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:31.698061   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:28.755855   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:30.756687   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:29.489992   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:31.490850   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:33.990464   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:30.622885   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:30.635990   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:30.636065   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:30.671418   84547 cri.go:89] found id: ""
	I1210 00:01:30.671453   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.671465   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:30.671475   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:30.671548   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:30.707168   84547 cri.go:89] found id: ""
	I1210 00:01:30.707203   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.707214   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:30.707222   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:30.707275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:30.741349   84547 cri.go:89] found id: ""
	I1210 00:01:30.741377   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.741386   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:30.741395   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:30.741449   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:30.774952   84547 cri.go:89] found id: ""
	I1210 00:01:30.774985   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.774997   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:30.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:30.775062   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:30.809596   84547 cri.go:89] found id: ""
	I1210 00:01:30.809623   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.809631   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:30.809636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:30.809699   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:30.844256   84547 cri.go:89] found id: ""
	I1210 00:01:30.844288   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.844300   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:30.844308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:30.844371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:30.876086   84547 cri.go:89] found id: ""
	I1210 00:01:30.876113   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.876124   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:30.876131   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:30.876195   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:30.910858   84547 cri.go:89] found id: ""
	I1210 00:01:30.910884   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.910895   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:30.910905   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:30.910920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:30.990855   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:30.990876   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:30.990888   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:31.069186   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:31.069221   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:31.109653   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:31.109689   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:31.164962   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:31.165001   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:33.679784   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:33.692222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:33.692310   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:33.727937   84547 cri.go:89] found id: ""
	I1210 00:01:33.727960   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.727967   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:33.727973   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:33.728022   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:33.762122   84547 cri.go:89] found id: ""
	I1210 00:01:33.762151   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.762162   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:33.762171   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:33.762237   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:33.793855   84547 cri.go:89] found id: ""
	I1210 00:01:33.793883   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.793894   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:33.793903   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:33.793961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:33.827240   84547 cri.go:89] found id: ""
	I1210 00:01:33.827280   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.827292   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:33.827300   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:33.827366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:33.863705   84547 cri.go:89] found id: ""
	I1210 00:01:33.863728   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.863738   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:33.863743   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:33.863792   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:33.897189   84547 cri.go:89] found id: ""
	I1210 00:01:33.897213   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.897224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:33.897229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:33.897282   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:33.930001   84547 cri.go:89] found id: ""
	I1210 00:01:33.930034   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.930044   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:33.930052   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:33.930113   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:33.967330   84547 cri.go:89] found id: ""
	I1210 00:01:33.967359   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.967367   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:33.967378   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:33.967390   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:34.051952   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:34.051996   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:34.087729   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:34.087762   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:34.137879   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:34.137915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:34.151989   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:34.152025   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:34.225587   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:33.698633   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.197023   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:33.257021   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:35.258050   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.489900   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:38.490643   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.726065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:36.740513   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:36.740594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:36.774959   84547 cri.go:89] found id: ""
	I1210 00:01:36.774991   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.775000   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:36.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:36.775070   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:36.808888   84547 cri.go:89] found id: ""
	I1210 00:01:36.808921   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.808934   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:36.808941   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:36.809001   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:36.841695   84547 cri.go:89] found id: ""
	I1210 00:01:36.841727   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.841738   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:36.841748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:36.841840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:36.877479   84547 cri.go:89] found id: ""
	I1210 00:01:36.877509   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.877522   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:36.877531   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:36.877600   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:36.909232   84547 cri.go:89] found id: ""
	I1210 00:01:36.909257   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.909265   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:36.909271   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:36.909328   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:36.945858   84547 cri.go:89] found id: ""
	I1210 00:01:36.945892   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.945904   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:36.945912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:36.945981   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:36.978559   84547 cri.go:89] found id: ""
	I1210 00:01:36.978592   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.978604   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:36.978611   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:36.978674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:37.015523   84547 cri.go:89] found id: ""
	I1210 00:01:37.015555   84547 logs.go:282] 0 containers: []
	W1210 00:01:37.015587   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:37.015598   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:37.015613   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:37.094825   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:37.094876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:37.134442   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:37.134470   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:37.184691   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:37.184728   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:37.198801   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:37.198828   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:37.268415   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:39.768790   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:39.783106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:39.783184   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:39.816629   84547 cri.go:89] found id: ""
	I1210 00:01:39.816655   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.816665   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:39.816672   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:39.816749   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:39.851518   84547 cri.go:89] found id: ""
	I1210 00:01:39.851550   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.851581   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:39.851590   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:39.851648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:39.887790   84547 cri.go:89] found id: ""
	I1210 00:01:39.887827   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.887842   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:39.887852   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:39.887924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:39.922230   84547 cri.go:89] found id: ""
	I1210 00:01:39.922255   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.922262   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:39.922268   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:39.922332   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:39.957142   84547 cri.go:89] found id: ""
	I1210 00:01:39.957174   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.957184   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:39.957192   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:39.957254   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:40.004583   84547 cri.go:89] found id: ""
	I1210 00:01:40.004609   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.004618   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:40.004624   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:40.004675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:40.038278   84547 cri.go:89] found id: ""
	I1210 00:01:40.038300   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.038308   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:40.038313   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:40.038366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:40.071920   84547 cri.go:89] found id: ""
	I1210 00:01:40.071947   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.071954   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:40.071963   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:40.071973   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:40.142005   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:40.142033   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:40.142049   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:40.219413   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:40.219452   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:40.260786   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:40.260822   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:40.315943   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:40.315988   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:38.198247   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:40.198455   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:37.756243   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:39.756567   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:41.757107   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:40.990316   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:42.990948   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:42.832592   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:42.847532   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:42.847630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:42.879313   84547 cri.go:89] found id: ""
	I1210 00:01:42.879341   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.879348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:42.879354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:42.879403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:42.914839   84547 cri.go:89] found id: ""
	I1210 00:01:42.914866   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.914877   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:42.914884   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:42.914947   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:42.949514   84547 cri.go:89] found id: ""
	I1210 00:01:42.949553   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.949564   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:42.949572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:42.949632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:42.987974   84547 cri.go:89] found id: ""
	I1210 00:01:42.988004   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.988021   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:42.988029   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:42.988087   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:43.022835   84547 cri.go:89] found id: ""
	I1210 00:01:43.022860   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.022867   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:43.022873   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:43.022921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:43.055940   84547 cri.go:89] found id: ""
	I1210 00:01:43.055967   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.055975   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:43.055981   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:43.056030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:43.088728   84547 cri.go:89] found id: ""
	I1210 00:01:43.088754   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.088762   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:43.088769   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:43.088827   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:43.122809   84547 cri.go:89] found id: ""
	I1210 00:01:43.122842   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.122853   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:43.122865   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:43.122881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:43.172243   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:43.172277   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:43.186566   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:43.186596   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:43.264301   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:43.264327   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:43.264341   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:43.339804   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:43.339848   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:42.698066   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.197813   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:44.256069   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:46.256746   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.490253   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:47.989687   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.881356   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:45.894702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:45.894779   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:45.928927   84547 cri.go:89] found id: ""
	I1210 00:01:45.928951   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.928958   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:45.928964   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:45.929009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:45.965271   84547 cri.go:89] found id: ""
	I1210 00:01:45.965303   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.965315   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:45.965323   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:45.965392   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:45.999059   84547 cri.go:89] found id: ""
	I1210 00:01:45.999082   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.999090   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:45.999095   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:45.999140   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:46.034425   84547 cri.go:89] found id: ""
	I1210 00:01:46.034456   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.034468   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:46.034476   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:46.034529   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:46.067946   84547 cri.go:89] found id: ""
	I1210 00:01:46.067970   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.067986   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:46.067993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:46.068056   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:46.101671   84547 cri.go:89] found id: ""
	I1210 00:01:46.101699   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.101710   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:46.101718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:46.101783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:46.135860   84547 cri.go:89] found id: ""
	I1210 00:01:46.135885   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.135893   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:46.135898   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:46.135948   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:46.177398   84547 cri.go:89] found id: ""
	I1210 00:01:46.177439   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.177450   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:46.177461   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:46.177476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:46.248134   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:46.248160   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:46.248175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:46.323652   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:46.323688   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:46.364017   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:46.364044   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:46.429480   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:46.429524   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:48.956518   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:48.970209   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:48.970294   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:49.008933   84547 cri.go:89] found id: ""
	I1210 00:01:49.008966   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.008977   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:49.008986   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:49.009050   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:49.044802   84547 cri.go:89] found id: ""
	I1210 00:01:49.044833   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.044843   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:49.044850   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:49.044921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:49.077484   84547 cri.go:89] found id: ""
	I1210 00:01:49.077517   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.077525   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:49.077530   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:49.077576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:49.110144   84547 cri.go:89] found id: ""
	I1210 00:01:49.110174   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.110186   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:49.110193   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:49.110255   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:49.142591   84547 cri.go:89] found id: ""
	I1210 00:01:49.142622   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.142633   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:49.142646   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:49.142709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:49.175456   84547 cri.go:89] found id: ""
	I1210 00:01:49.175486   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.175497   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:49.175505   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:49.175603   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:49.208341   84547 cri.go:89] found id: ""
	I1210 00:01:49.208368   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.208379   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:49.208387   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:49.208445   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:49.241486   84547 cri.go:89] found id: ""
	I1210 00:01:49.241509   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.241518   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:49.241615   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:49.241647   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:49.280023   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:49.280051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:49.328822   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:49.328858   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:49.343076   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:49.343104   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:49.418010   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:49.418037   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:49.418051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:47.198917   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:49.697840   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:48.757230   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:51.256927   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:49.990701   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:52.490649   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:52.004053   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:52.017350   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:52.017418   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:52.055617   84547 cri.go:89] found id: ""
	I1210 00:01:52.055641   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.055648   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:52.055654   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:52.055712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:52.088601   84547 cri.go:89] found id: ""
	I1210 00:01:52.088629   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.088637   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:52.088642   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:52.088694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:52.121880   84547 cri.go:89] found id: ""
	I1210 00:01:52.121912   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.121922   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:52.121928   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:52.121986   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:52.161281   84547 cri.go:89] found id: ""
	I1210 00:01:52.161321   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.161334   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:52.161341   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:52.161406   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:52.222768   84547 cri.go:89] found id: ""
	I1210 00:01:52.222793   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.222800   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:52.222806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:52.222862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:52.271441   84547 cri.go:89] found id: ""
	I1210 00:01:52.271465   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.271473   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:52.271479   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:52.271526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:52.311128   84547 cri.go:89] found id: ""
	I1210 00:01:52.311152   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.311160   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:52.311165   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:52.311211   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:52.348862   84547 cri.go:89] found id: ""
	I1210 00:01:52.348885   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.348892   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:52.348900   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:52.348913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:52.401280   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:52.401324   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:52.415532   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:52.415580   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:52.484956   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:52.484979   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:52.484994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:52.565102   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:52.565137   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:55.106446   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:55.121756   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:55.121846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:55.158601   84547 cri.go:89] found id: ""
	I1210 00:01:55.158632   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.158643   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:55.158650   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:55.158712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:55.192424   84547 cri.go:89] found id: ""
	I1210 00:01:55.192454   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.192464   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:55.192471   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:55.192530   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:55.226178   84547 cri.go:89] found id: ""
	I1210 00:01:55.226204   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.226213   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:55.226222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:55.226285   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:55.264123   84547 cri.go:89] found id: ""
	I1210 00:01:55.264148   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.264161   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:55.264169   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:55.264226   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:55.302476   84547 cri.go:89] found id: ""
	I1210 00:01:55.302503   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.302512   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:55.302520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:55.302597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:55.342308   84547 cri.go:89] found id: ""
	I1210 00:01:55.342341   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.342352   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:55.342360   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:55.342419   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:55.382203   84547 cri.go:89] found id: ""
	I1210 00:01:55.382226   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.382232   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:55.382238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:55.382286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:55.421381   84547 cri.go:89] found id: ""
	I1210 00:01:55.421409   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.421421   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:55.421432   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:55.421449   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:55.473758   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:55.473793   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:55.488138   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:55.488166   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:01:52.198005   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:54.697589   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:56.697748   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:53.756310   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:55.756964   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:54.990271   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:57.490303   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	W1210 00:01:55.567216   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:55.567240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:55.567251   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:55.648276   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:55.648319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:58.185245   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:58.199015   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:58.199091   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:58.232327   84547 cri.go:89] found id: ""
	I1210 00:01:58.232352   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.232360   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:58.232368   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:58.232436   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:58.270312   84547 cri.go:89] found id: ""
	I1210 00:01:58.270342   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.270353   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:58.270360   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:58.270420   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:58.303389   84547 cri.go:89] found id: ""
	I1210 00:01:58.303415   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.303422   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:58.303427   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:58.303486   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:58.338703   84547 cri.go:89] found id: ""
	I1210 00:01:58.338735   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.338747   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:58.338755   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:58.338817   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:58.376727   84547 cri.go:89] found id: ""
	I1210 00:01:58.376759   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.376770   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:58.376779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:58.376841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:58.410192   84547 cri.go:89] found id: ""
	I1210 00:01:58.410217   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.410224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:58.410230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:58.410286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:58.449772   84547 cri.go:89] found id: ""
	I1210 00:01:58.449794   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.449802   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:58.449807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:58.449859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:58.484285   84547 cri.go:89] found id: ""
	I1210 00:01:58.484316   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.484328   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:58.484339   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:58.484356   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:58.538402   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:58.538438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:58.551361   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:58.551391   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:58.613809   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:58.613836   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:58.613850   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:58.689606   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:58.689640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:58.698283   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.197599   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:57.758229   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:00.256339   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:59.490413   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.491035   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:03.990725   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.230924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:01.244878   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:01.244990   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:01.280475   84547 cri.go:89] found id: ""
	I1210 00:02:01.280501   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.280509   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:01.280514   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:01.280561   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:01.323042   84547 cri.go:89] found id: ""
	I1210 00:02:01.323065   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.323071   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:01.323077   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:01.323124   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:01.356146   84547 cri.go:89] found id: ""
	I1210 00:02:01.356171   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.356181   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:01.356190   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:01.356247   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:01.392542   84547 cri.go:89] found id: ""
	I1210 00:02:01.392567   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.392577   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:01.392584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:01.392721   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:01.426232   84547 cri.go:89] found id: ""
	I1210 00:02:01.426257   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.426268   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:01.426275   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:01.426341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:01.459754   84547 cri.go:89] found id: ""
	I1210 00:02:01.459786   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.459798   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:01.459806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:01.459865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:01.492406   84547 cri.go:89] found id: ""
	I1210 00:02:01.492435   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.492445   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:01.492450   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:01.492499   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:01.532012   84547 cri.go:89] found id: ""
	I1210 00:02:01.532034   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.532042   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:01.532049   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:01.532060   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:01.583145   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:01.583181   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:01.596910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:01.596939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:01.670480   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:01.670506   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:01.670534   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:01.748001   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:01.748041   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:04.291065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:04.304507   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:04.304587   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:04.341673   84547 cri.go:89] found id: ""
	I1210 00:02:04.341700   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.341713   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:04.341720   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:04.341772   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:04.379743   84547 cri.go:89] found id: ""
	I1210 00:02:04.379776   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.379787   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:04.379795   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:04.379856   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:04.413532   84547 cri.go:89] found id: ""
	I1210 00:02:04.413562   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.413573   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:04.413588   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:04.413648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:04.449192   84547 cri.go:89] found id: ""
	I1210 00:02:04.449221   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.449231   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:04.449238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:04.449324   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:04.484638   84547 cri.go:89] found id: ""
	I1210 00:02:04.484666   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.484677   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:04.484686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:04.484745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:04.524854   84547 cri.go:89] found id: ""
	I1210 00:02:04.524889   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.524903   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:04.524912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:04.524976   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:04.564697   84547 cri.go:89] found id: ""
	I1210 00:02:04.564726   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.564737   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:04.564748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:04.564797   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:04.603506   84547 cri.go:89] found id: ""
	I1210 00:02:04.603534   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.603544   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:04.603554   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:04.603583   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:04.653025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:04.653062   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:04.666833   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:04.666878   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:04.745491   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:04.745513   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:04.745526   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:04.825267   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:04.825304   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:03.702878   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:06.197834   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:02.755541   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:04.757334   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:07.256160   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:06.490491   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:08.491144   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:07.365419   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:07.378968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:07.379030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:07.412830   84547 cri.go:89] found id: ""
	I1210 00:02:07.412859   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.412868   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:07.412873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:07.412938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:07.445622   84547 cri.go:89] found id: ""
	I1210 00:02:07.445661   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.445674   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:07.445682   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:07.445742   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:07.480431   84547 cri.go:89] found id: ""
	I1210 00:02:07.480466   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.480474   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:07.480480   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:07.480533   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:07.518741   84547 cri.go:89] found id: ""
	I1210 00:02:07.518776   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.518790   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:07.518797   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:07.518860   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:07.552193   84547 cri.go:89] found id: ""
	I1210 00:02:07.552216   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.552223   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:07.552229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:07.552275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:07.585762   84547 cri.go:89] found id: ""
	I1210 00:02:07.585784   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.585792   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:07.585798   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:07.585843   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:07.618619   84547 cri.go:89] found id: ""
	I1210 00:02:07.618645   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.618653   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:07.618659   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:07.618709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:07.653377   84547 cri.go:89] found id: ""
	I1210 00:02:07.653418   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.653428   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:07.653440   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:07.653456   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:07.709366   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:07.709401   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:07.723762   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:07.723792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:07.804849   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:07.804869   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:07.804886   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:07.887117   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:07.887154   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:10.423120   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:10.436563   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:10.436628   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:10.470622   84547 cri.go:89] found id: ""
	I1210 00:02:10.470650   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.470658   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:10.470664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:10.470735   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:10.506211   84547 cri.go:89] found id: ""
	I1210 00:02:10.506238   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.506250   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:10.506257   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:10.506368   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:08.198217   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.699364   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:09.256492   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:11.256897   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.990662   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:13.491635   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.541846   84547 cri.go:89] found id: ""
	I1210 00:02:10.541871   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.541879   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:10.541885   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:10.541952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:10.581391   84547 cri.go:89] found id: ""
	I1210 00:02:10.581416   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.581427   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:10.581435   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:10.581503   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:10.615172   84547 cri.go:89] found id: ""
	I1210 00:02:10.615206   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.615216   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:10.615223   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:10.615289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:10.650791   84547 cri.go:89] found id: ""
	I1210 00:02:10.650813   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.650821   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:10.650826   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:10.650876   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:10.685428   84547 cri.go:89] found id: ""
	I1210 00:02:10.685452   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.685460   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:10.685466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:10.685524   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:10.719139   84547 cri.go:89] found id: ""
	I1210 00:02:10.719174   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.719186   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:10.719196   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:10.719211   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:10.732045   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:10.732073   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:10.805084   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:10.805111   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:10.805127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:10.888301   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:10.888337   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:10.926005   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:10.926033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:13.479317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:13.494021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:13.494089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:13.527753   84547 cri.go:89] found id: ""
	I1210 00:02:13.527787   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.527799   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:13.527806   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:13.527862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:13.560574   84547 cri.go:89] found id: ""
	I1210 00:02:13.560607   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.560618   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:13.560625   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:13.560688   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:13.595507   84547 cri.go:89] found id: ""
	I1210 00:02:13.595551   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.595584   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:13.595592   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:13.595657   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:13.629848   84547 cri.go:89] found id: ""
	I1210 00:02:13.629873   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.629884   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:13.629892   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:13.629952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:13.662407   84547 cri.go:89] found id: ""
	I1210 00:02:13.662436   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.662447   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:13.662454   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:13.662509   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:13.694892   84547 cri.go:89] found id: ""
	I1210 00:02:13.694921   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.694940   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:13.694949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:13.695013   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:13.733248   84547 cri.go:89] found id: ""
	I1210 00:02:13.733327   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.733349   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:13.733358   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:13.733426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:13.771853   84547 cri.go:89] found id: ""
	I1210 00:02:13.771884   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.771894   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:13.771906   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:13.771920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:13.846886   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:13.846913   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:13.846928   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:13.929722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:13.929758   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:13.968401   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:13.968427   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:14.019770   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:14.019811   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:13.197726   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.198437   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:13.257271   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.755750   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.990729   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:18.490851   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:16.532794   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:16.547084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:16.547172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:16.584046   84547 cri.go:89] found id: ""
	I1210 00:02:16.584073   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.584084   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:16.584091   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:16.584150   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:16.615981   84547 cri.go:89] found id: ""
	I1210 00:02:16.616012   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.616023   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:16.616030   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:16.616094   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:16.647939   84547 cri.go:89] found id: ""
	I1210 00:02:16.647967   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.647979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:16.647986   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:16.648048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:16.682585   84547 cri.go:89] found id: ""
	I1210 00:02:16.682620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.682632   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:16.682640   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:16.682695   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:16.718593   84547 cri.go:89] found id: ""
	I1210 00:02:16.718620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.718628   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:16.718634   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:16.718687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:16.752513   84547 cri.go:89] found id: ""
	I1210 00:02:16.752536   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.752543   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:16.752549   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:16.752598   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:16.787679   84547 cri.go:89] found id: ""
	I1210 00:02:16.787702   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.787710   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:16.787715   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:16.787777   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:16.821275   84547 cri.go:89] found id: ""
	I1210 00:02:16.821297   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.821305   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:16.821312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:16.821322   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:16.872500   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:16.872533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:16.885185   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:16.885212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:16.962658   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:16.962679   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:16.962694   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:17.039689   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:17.039726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:19.578060   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:19.590601   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:19.590675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:19.622145   84547 cri.go:89] found id: ""
	I1210 00:02:19.622170   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.622179   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:19.622184   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:19.622231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:19.653204   84547 cri.go:89] found id: ""
	I1210 00:02:19.653232   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.653243   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:19.653250   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:19.653317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:19.685104   84547 cri.go:89] found id: ""
	I1210 00:02:19.685137   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.685148   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:19.685156   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:19.685213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:19.719087   84547 cri.go:89] found id: ""
	I1210 00:02:19.719113   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.719121   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:19.719126   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:19.719176   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:19.753210   84547 cri.go:89] found id: ""
	I1210 00:02:19.753239   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.753250   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:19.753258   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:19.753317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:19.787608   84547 cri.go:89] found id: ""
	I1210 00:02:19.787635   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.787645   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:19.787653   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:19.787718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:19.823111   84547 cri.go:89] found id: ""
	I1210 00:02:19.823142   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.823154   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:19.823161   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:19.823221   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:19.858274   84547 cri.go:89] found id: ""
	I1210 00:02:19.858301   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.858312   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:19.858323   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:19.858344   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:19.905386   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:19.905420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:19.918995   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:19.919026   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:19.990676   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:19.990700   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:19.990716   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:20.064396   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:20.064435   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:17.698109   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:20.197963   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:17.756633   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:19.757487   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:22.257036   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:20.990682   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:23.490383   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:22.604477   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:22.617408   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:22.617487   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:22.650144   84547 cri.go:89] found id: ""
	I1210 00:02:22.650178   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.650189   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:22.650197   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:22.650293   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:22.684342   84547 cri.go:89] found id: ""
	I1210 00:02:22.684367   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.684375   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:22.684380   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:22.684429   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:22.718168   84547 cri.go:89] found id: ""
	I1210 00:02:22.718194   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.718204   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:22.718211   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:22.718271   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:22.755192   84547 cri.go:89] found id: ""
	I1210 00:02:22.755222   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.755232   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:22.755240   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:22.755297   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:22.793095   84547 cri.go:89] found id: ""
	I1210 00:02:22.793129   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.793141   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:22.793149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:22.793209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:22.831116   84547 cri.go:89] found id: ""
	I1210 00:02:22.831146   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.831157   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:22.831164   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:22.831235   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:22.869652   84547 cri.go:89] found id: ""
	I1210 00:02:22.869686   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.869704   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:22.869709   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:22.869756   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:22.907480   84547 cri.go:89] found id: ""
	I1210 00:02:22.907504   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.907513   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:22.907520   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:22.907533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:22.983880   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:22.983902   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:22.983915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:23.062840   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:23.062880   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:23.101427   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:23.101476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:23.153861   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:23.153893   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:22.198638   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:24.698708   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:24.757526   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:27.256296   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:25.491835   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:27.990755   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:25.666908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:25.680047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:25.680125   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:25.715821   84547 cri.go:89] found id: ""
	I1210 00:02:25.715853   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.715865   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:25.715873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:25.715931   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:25.748191   84547 cri.go:89] found id: ""
	I1210 00:02:25.748222   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.748232   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:25.748239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:25.748295   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:25.786474   84547 cri.go:89] found id: ""
	I1210 00:02:25.786498   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.786505   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:25.786511   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:25.786569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:25.819282   84547 cri.go:89] found id: ""
	I1210 00:02:25.819319   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.819330   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:25.819337   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:25.819400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:25.858064   84547 cri.go:89] found id: ""
	I1210 00:02:25.858091   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.858100   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:25.858106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:25.858169   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:25.893335   84547 cri.go:89] found id: ""
	I1210 00:02:25.893362   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.893373   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:25.893380   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:25.893439   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:25.927225   84547 cri.go:89] found id: ""
	I1210 00:02:25.927254   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.927265   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:25.927272   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:25.927341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:25.963678   84547 cri.go:89] found id: ""
	I1210 00:02:25.963715   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.963725   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:25.963738   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:25.963756   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:25.994462   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:25.994488   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:26.061394   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:26.061442   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:26.061458   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:26.135152   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:26.135187   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:26.171961   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:26.171994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:28.723326   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:28.735721   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:28.735787   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:28.769493   84547 cri.go:89] found id: ""
	I1210 00:02:28.769516   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.769525   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:28.769530   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:28.769582   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:28.805594   84547 cri.go:89] found id: ""
	I1210 00:02:28.805641   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.805652   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:28.805658   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:28.805704   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:28.838982   84547 cri.go:89] found id: ""
	I1210 00:02:28.839007   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.839015   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:28.839020   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:28.839072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:28.876607   84547 cri.go:89] found id: ""
	I1210 00:02:28.876627   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.876635   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:28.876641   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:28.876700   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:28.910352   84547 cri.go:89] found id: ""
	I1210 00:02:28.910379   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.910386   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:28.910392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:28.910464   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:28.943896   84547 cri.go:89] found id: ""
	I1210 00:02:28.943917   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.943924   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:28.943930   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:28.943978   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:28.976554   84547 cri.go:89] found id: ""
	I1210 00:02:28.976583   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.976590   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:28.976596   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:28.976644   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:29.011334   84547 cri.go:89] found id: ""
	I1210 00:02:29.011357   84547 logs.go:282] 0 containers: []
	W1210 00:02:29.011364   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:29.011372   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:29.011384   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:29.061418   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:29.061464   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:29.074887   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:29.074913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:29.147240   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:29.147261   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:29.147272   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:29.225058   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:29.225094   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:27.197730   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:29.197818   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.198788   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:29.256802   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.258194   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:30.490822   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:32.990271   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.763432   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:31.777257   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:31.777359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:31.812558   84547 cri.go:89] found id: ""
	I1210 00:02:31.812588   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.812598   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:31.812606   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:31.812668   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:31.847042   84547 cri.go:89] found id: ""
	I1210 00:02:31.847065   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.847082   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:31.847088   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:31.847135   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:31.881182   84547 cri.go:89] found id: ""
	I1210 00:02:31.881208   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.881216   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:31.881221   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:31.881272   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:31.917367   84547 cri.go:89] found id: ""
	I1210 00:02:31.917393   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.917401   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:31.917407   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:31.917454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:31.956836   84547 cri.go:89] found id: ""
	I1210 00:02:31.956868   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.956883   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:31.956893   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:31.956952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:31.993125   84547 cri.go:89] found id: ""
	I1210 00:02:31.993151   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.993160   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:31.993168   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:31.993225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:32.031648   84547 cri.go:89] found id: ""
	I1210 00:02:32.031679   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.031687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:32.031692   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:32.031746   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:32.065894   84547 cri.go:89] found id: ""
	I1210 00:02:32.065923   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.065930   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:32.065941   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:32.065957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:32.133473   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:32.133496   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:32.133508   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:32.213129   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:32.213161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:32.251424   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:32.251453   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:32.302284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:32.302323   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:34.815963   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:34.829460   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:34.829543   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:34.865811   84547 cri.go:89] found id: ""
	I1210 00:02:34.865837   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.865847   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:34.865854   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:34.865916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:34.899185   84547 cri.go:89] found id: ""
	I1210 00:02:34.899211   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.899220   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:34.899227   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:34.899289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:34.932472   84547 cri.go:89] found id: ""
	I1210 00:02:34.932500   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.932509   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:34.932517   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:34.932581   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:34.965817   84547 cri.go:89] found id: ""
	I1210 00:02:34.965846   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.965857   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:34.965866   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:34.965930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:35.000036   84547 cri.go:89] found id: ""
	I1210 00:02:35.000066   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.000077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:35.000084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:35.000139   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:35.033808   84547 cri.go:89] found id: ""
	I1210 00:02:35.033839   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.033850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:35.033857   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:35.033916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:35.066242   84547 cri.go:89] found id: ""
	I1210 00:02:35.066269   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.066278   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:35.066285   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:35.066349   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:35.100740   84547 cri.go:89] found id: ""
	I1210 00:02:35.100763   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.100771   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:35.100779   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:35.100792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:35.155483   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:35.155520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:35.168910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:35.168939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:35.243234   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:35.243252   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:35.243263   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:35.320622   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:35.320657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:33.698543   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:36.198250   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:33.756108   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:35.757682   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:34.990969   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:37.495166   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:37.855684   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:37.869056   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:37.869156   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:37.903720   84547 cri.go:89] found id: ""
	I1210 00:02:37.903748   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.903759   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:37.903766   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:37.903859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:37.937741   84547 cri.go:89] found id: ""
	I1210 00:02:37.937780   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.937791   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:37.937808   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:37.937869   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:37.975255   84547 cri.go:89] found id: ""
	I1210 00:02:37.975281   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.975292   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:37.975299   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:37.975359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:38.017348   84547 cri.go:89] found id: ""
	I1210 00:02:38.017381   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.017393   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:38.017400   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:38.017460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:38.051039   84547 cri.go:89] found id: ""
	I1210 00:02:38.051065   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.051073   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:38.051079   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:38.051129   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:38.085752   84547 cri.go:89] found id: ""
	I1210 00:02:38.085782   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.085791   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:38.085799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:38.085858   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:38.119975   84547 cri.go:89] found id: ""
	I1210 00:02:38.120004   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.120014   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:38.120021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:38.120086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:38.161467   84547 cri.go:89] found id: ""
	I1210 00:02:38.161499   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.161526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:38.161537   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:38.161551   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:38.222277   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:38.222314   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:38.239300   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:38.239332   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:38.308997   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:38.309016   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:38.309032   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:38.394064   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:38.394108   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:38.198484   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.697916   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:38.257723   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.756530   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:39.990445   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:41.993952   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.933406   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:40.945862   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:40.945937   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:40.979503   84547 cri.go:89] found id: ""
	I1210 00:02:40.979532   84547 logs.go:282] 0 containers: []
	W1210 00:02:40.979540   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:40.979545   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:40.979619   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:41.016758   84547 cri.go:89] found id: ""
	I1210 00:02:41.016792   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.016803   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:41.016811   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:41.016873   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:41.053562   84547 cri.go:89] found id: ""
	I1210 00:02:41.053593   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.053601   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:41.053607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:41.053667   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:41.086713   84547 cri.go:89] found id: ""
	I1210 00:02:41.086745   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.086757   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:41.086767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:41.086830   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:41.122901   84547 cri.go:89] found id: ""
	I1210 00:02:41.122935   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.122945   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:41.122952   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:41.123011   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:41.156306   84547 cri.go:89] found id: ""
	I1210 00:02:41.156337   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.156355   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:41.156362   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:41.156423   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:41.189840   84547 cri.go:89] found id: ""
	I1210 00:02:41.189871   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.189882   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:41.189890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:41.189946   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:41.223019   84547 cri.go:89] found id: ""
	I1210 00:02:41.223051   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.223061   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:41.223072   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:41.223088   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:41.275608   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:41.275640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:41.289181   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:41.289210   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:41.358375   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:41.358404   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:41.358420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:41.440214   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:41.440250   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:43.980600   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:43.993110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:43.993165   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:44.026688   84547 cri.go:89] found id: ""
	I1210 00:02:44.026721   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.026732   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:44.026741   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:44.026796   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:44.062914   84547 cri.go:89] found id: ""
	I1210 00:02:44.062936   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.062943   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:44.062948   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:44.062999   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:44.105974   84547 cri.go:89] found id: ""
	I1210 00:02:44.106001   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.106009   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:44.106014   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:44.106061   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:44.140239   84547 cri.go:89] found id: ""
	I1210 00:02:44.140265   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.140274   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:44.140280   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:44.140338   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:44.175754   84547 cri.go:89] found id: ""
	I1210 00:02:44.175785   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.175796   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:44.175803   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:44.175870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:44.211663   84547 cri.go:89] found id: ""
	I1210 00:02:44.211694   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.211705   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:44.211712   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:44.211776   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:44.244796   84547 cri.go:89] found id: ""
	I1210 00:02:44.244821   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.244831   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:44.244837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:44.244898   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:44.282491   84547 cri.go:89] found id: ""
	I1210 00:02:44.282515   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.282528   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:44.282549   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:44.282562   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:44.335284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:44.335328   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:44.349489   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:44.349530   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:44.418643   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:44.418668   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:44.418682   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:44.493901   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:44.493932   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:43.197494   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:45.198341   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:43.256947   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:45.257225   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:47.258073   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:44.491413   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:46.990521   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:48.991847   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:47.033132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:47.046322   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:47.046403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:47.078490   84547 cri.go:89] found id: ""
	I1210 00:02:47.078521   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.078533   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:47.078541   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:47.078602   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:47.111457   84547 cri.go:89] found id: ""
	I1210 00:02:47.111479   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.111487   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:47.111492   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:47.111538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:47.144643   84547 cri.go:89] found id: ""
	I1210 00:02:47.144678   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.144689   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:47.144696   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:47.144757   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:47.178106   84547 cri.go:89] found id: ""
	I1210 00:02:47.178131   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.178141   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:47.178148   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:47.178213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:47.215670   84547 cri.go:89] found id: ""
	I1210 00:02:47.215697   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.215712   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:47.215718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:47.215767   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:47.248916   84547 cri.go:89] found id: ""
	I1210 00:02:47.248941   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.248948   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:47.248953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:47.249002   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:47.287632   84547 cri.go:89] found id: ""
	I1210 00:02:47.287660   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.287671   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:47.287680   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:47.287745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:47.327064   84547 cri.go:89] found id: ""
	I1210 00:02:47.327094   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.327103   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:47.327112   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:47.327126   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:47.341132   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:47.341176   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:47.417100   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:47.417121   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:47.417134   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:47.502612   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:47.502648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:47.541312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:47.541339   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.095403   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:50.108145   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:50.108202   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:50.140424   84547 cri.go:89] found id: ""
	I1210 00:02:50.140451   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.140462   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:50.140472   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:50.140532   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:50.173836   84547 cri.go:89] found id: ""
	I1210 00:02:50.173859   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.173866   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:50.173872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:50.173928   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:50.213916   84547 cri.go:89] found id: ""
	I1210 00:02:50.213937   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.213944   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:50.213949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:50.213997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:50.246854   84547 cri.go:89] found id: ""
	I1210 00:02:50.246889   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.246899   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:50.246907   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:50.246956   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:50.281416   84547 cri.go:89] found id: ""
	I1210 00:02:50.281448   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.281456   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:50.281462   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:50.281511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:50.313263   84547 cri.go:89] found id: ""
	I1210 00:02:50.313296   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.313308   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:50.313318   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:50.313385   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:50.347421   84547 cri.go:89] found id: ""
	I1210 00:02:50.347453   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.347463   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:50.347470   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:50.347544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:50.383111   84547 cri.go:89] found id: ""
	I1210 00:02:50.383134   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.383142   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:50.383151   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:50.383162   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:50.421982   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:50.422013   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.475478   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:50.475520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:50.489202   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:50.489256   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:02:47.199043   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:49.698055   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.698813   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:49.756585   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.757407   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.489831   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:53.490572   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	W1210 00:02:50.559501   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:50.559539   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:50.559552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.136042   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:53.149149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:53.149227   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:53.189323   84547 cri.go:89] found id: ""
	I1210 00:02:53.189349   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.189357   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:53.189365   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:53.189425   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:53.225235   84547 cri.go:89] found id: ""
	I1210 00:02:53.225269   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.225281   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:53.225288   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:53.225347   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:53.263465   84547 cri.go:89] found id: ""
	I1210 00:02:53.263492   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.263502   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:53.263510   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:53.263597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:53.297544   84547 cri.go:89] found id: ""
	I1210 00:02:53.297571   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.297583   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:53.297591   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:53.297656   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:53.331731   84547 cri.go:89] found id: ""
	I1210 00:02:53.331755   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.331762   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:53.331767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:53.331815   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:53.367395   84547 cri.go:89] found id: ""
	I1210 00:02:53.367427   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.367440   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:53.367447   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:53.367511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:53.403297   84547 cri.go:89] found id: ""
	I1210 00:02:53.403324   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.403332   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:53.403338   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:53.403398   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:53.437126   84547 cri.go:89] found id: ""
	I1210 00:02:53.437150   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.437158   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:53.437166   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:53.437177   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:53.489875   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:53.489914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:53.503915   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:53.503940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:53.578086   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:53.578114   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:53.578127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.658463   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:53.658501   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:53.699332   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.197941   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:54.257053   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.258352   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:55.990548   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:58.489947   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.197093   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:56.211959   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:56.212020   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:56.250462   84547 cri.go:89] found id: ""
	I1210 00:02:56.250484   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.250493   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:56.250498   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:56.250552   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:56.286828   84547 cri.go:89] found id: ""
	I1210 00:02:56.286854   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.286865   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:56.286872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:56.286939   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:56.320750   84547 cri.go:89] found id: ""
	I1210 00:02:56.320779   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.320787   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:56.320793   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:56.320840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:56.355903   84547 cri.go:89] found id: ""
	I1210 00:02:56.355943   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.355954   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:56.355960   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:56.356026   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:56.395979   84547 cri.go:89] found id: ""
	I1210 00:02:56.396007   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.396018   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:56.396025   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:56.396081   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:56.431481   84547 cri.go:89] found id: ""
	I1210 00:02:56.431506   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.431514   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:56.431520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:56.431594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:56.470318   84547 cri.go:89] found id: ""
	I1210 00:02:56.470349   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.470356   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:56.470361   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:56.470410   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:56.506388   84547 cri.go:89] found id: ""
	I1210 00:02:56.506411   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.506418   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:56.506429   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:56.506441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:56.558438   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:56.558483   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:56.571981   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:56.572004   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:56.634665   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:56.634697   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:56.634713   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:56.715663   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:56.715697   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:59.255305   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:59.268846   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:59.268930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:59.302334   84547 cri.go:89] found id: ""
	I1210 00:02:59.302359   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.302366   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:59.302372   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:59.302426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:59.336380   84547 cri.go:89] found id: ""
	I1210 00:02:59.336409   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.336419   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:59.336425   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:59.336492   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:59.370179   84547 cri.go:89] found id: ""
	I1210 00:02:59.370201   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.370210   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:59.370214   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:59.370268   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:59.403199   84547 cri.go:89] found id: ""
	I1210 00:02:59.403222   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.403229   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:59.403236   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:59.403307   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:59.438650   84547 cri.go:89] found id: ""
	I1210 00:02:59.438673   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.438681   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:59.438686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:59.438736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:59.473166   84547 cri.go:89] found id: ""
	I1210 00:02:59.473191   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.473199   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:59.473205   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:59.473264   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:59.506857   84547 cri.go:89] found id: ""
	I1210 00:02:59.506879   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.506888   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:59.506902   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:59.506963   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:59.551488   84547 cri.go:89] found id: ""
	I1210 00:02:59.551515   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.551526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:59.551542   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:59.551557   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:59.605032   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:59.605069   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:59.619238   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:59.619271   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:59.690772   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:59.690798   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:59.690813   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:59.774424   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:59.774460   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:58.198128   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:00.698106   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:58.756596   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:01.257970   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:00.490268   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:02.990951   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:02.315240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:02.329636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:02.329728   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:02.368571   84547 cri.go:89] found id: ""
	I1210 00:03:02.368599   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.368609   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:02.368621   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:02.368687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:02.406107   84547 cri.go:89] found id: ""
	I1210 00:03:02.406136   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.406148   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:02.406155   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:02.406219   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:02.443050   84547 cri.go:89] found id: ""
	I1210 00:03:02.443077   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.443091   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:02.443098   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:02.443146   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:02.484425   84547 cri.go:89] found id: ""
	I1210 00:03:02.484451   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.484461   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:02.484469   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:02.484536   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:02.525624   84547 cri.go:89] found id: ""
	I1210 00:03:02.525647   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.525655   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:02.525661   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:02.525711   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:02.564808   84547 cri.go:89] found id: ""
	I1210 00:03:02.564839   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.564850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:02.564856   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:02.564907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:02.598322   84547 cri.go:89] found id: ""
	I1210 00:03:02.598346   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.598354   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:02.598359   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:02.598417   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:02.632256   84547 cri.go:89] found id: ""
	I1210 00:03:02.632310   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.632322   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:02.632334   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:02.632348   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:02.686025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:02.686063   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:02.701214   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:02.701237   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:02.773453   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:02.773477   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:02.773491   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:02.862017   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:02.862068   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:05.401014   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:05.415009   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:05.415072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:05.454916   84547 cri.go:89] found id: ""
	I1210 00:03:05.454945   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.454955   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:05.454962   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:05.455023   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:05.492955   84547 cri.go:89] found id: ""
	I1210 00:03:05.492981   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.492988   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:05.492995   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:05.493046   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:02.698652   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.196840   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:03.755858   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.757395   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.489875   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:07.990053   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.539646   84547 cri.go:89] found id: ""
	I1210 00:03:05.539670   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.539683   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:05.539690   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:05.539755   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:05.574534   84547 cri.go:89] found id: ""
	I1210 00:03:05.574559   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.574567   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:05.574572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:05.574632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:05.608141   84547 cri.go:89] found id: ""
	I1210 00:03:05.608166   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.608174   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:05.608180   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:05.608243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:05.643725   84547 cri.go:89] found id: ""
	I1210 00:03:05.643751   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.643759   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:05.643765   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:05.643812   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:05.677730   84547 cri.go:89] found id: ""
	I1210 00:03:05.677760   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.677772   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:05.677779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:05.677846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:05.712475   84547 cri.go:89] found id: ""
	I1210 00:03:05.712507   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.712519   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:05.712531   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:05.712548   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:05.764532   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:05.764569   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:05.777910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:05.777938   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:05.851070   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:05.851099   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:05.851114   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:05.929518   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:05.929554   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.465166   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:08.478666   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:08.478723   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:08.513763   84547 cri.go:89] found id: ""
	I1210 00:03:08.513795   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.513808   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:08.513816   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:08.513877   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:08.548272   84547 cri.go:89] found id: ""
	I1210 00:03:08.548299   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.548306   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:08.548313   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:08.548397   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:08.581773   84547 cri.go:89] found id: ""
	I1210 00:03:08.581801   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.581812   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:08.581820   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:08.581884   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:08.613625   84547 cri.go:89] found id: ""
	I1210 00:03:08.613655   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.613664   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:08.613669   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:08.613716   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:08.644618   84547 cri.go:89] found id: ""
	I1210 00:03:08.644644   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.644652   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:08.644658   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:08.644717   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:08.676840   84547 cri.go:89] found id: ""
	I1210 00:03:08.676867   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.676879   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:08.676887   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:08.676952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:08.709918   84547 cri.go:89] found id: ""
	I1210 00:03:08.709952   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.709961   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:08.709969   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:08.710029   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:08.740842   84547 cri.go:89] found id: ""
	I1210 00:03:08.740871   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.740882   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:08.740893   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:08.740907   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:08.808679   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:08.808705   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:08.808721   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:08.884692   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:08.884733   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.920922   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:08.920952   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:08.971404   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:08.971441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:07.197548   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:09.197749   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:11.198894   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:08.258477   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:10.755482   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:10.490495   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:12.990709   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:11.484905   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:11.498791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:11.498865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:11.535784   84547 cri.go:89] found id: ""
	I1210 00:03:11.535809   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.535820   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:11.535827   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:11.535890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:11.567981   84547 cri.go:89] found id: ""
	I1210 00:03:11.568006   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.568019   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:11.568026   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:11.568083   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:11.601188   84547 cri.go:89] found id: ""
	I1210 00:03:11.601213   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.601224   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:11.601231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:11.601289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:11.635759   84547 cri.go:89] found id: ""
	I1210 00:03:11.635787   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.635798   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:11.635807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:11.635867   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:11.675512   84547 cri.go:89] found id: ""
	I1210 00:03:11.675537   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.675547   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:11.675554   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:11.675638   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:11.709889   84547 cri.go:89] found id: ""
	I1210 00:03:11.709912   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.709922   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:11.709929   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:11.709987   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:11.744571   84547 cri.go:89] found id: ""
	I1210 00:03:11.744603   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.744610   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:11.744616   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:11.744677   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:11.779091   84547 cri.go:89] found id: ""
	I1210 00:03:11.779123   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.779136   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:11.779146   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:11.779156   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:11.830682   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:11.830726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:11.844352   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:11.844383   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:11.915025   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:11.915046   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:11.915058   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:11.998581   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:11.998620   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:14.536408   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:14.550250   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:14.550321   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:14.586005   84547 cri.go:89] found id: ""
	I1210 00:03:14.586037   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.586049   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:14.586057   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:14.586118   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:14.620124   84547 cri.go:89] found id: ""
	I1210 00:03:14.620159   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.620170   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:14.620178   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:14.620231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:14.655079   84547 cri.go:89] found id: ""
	I1210 00:03:14.655104   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.655112   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:14.655117   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:14.655178   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:14.688253   84547 cri.go:89] found id: ""
	I1210 00:03:14.688285   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.688298   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:14.688308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:14.688371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:14.726904   84547 cri.go:89] found id: ""
	I1210 00:03:14.726932   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.726940   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:14.726945   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:14.726994   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:14.764836   84547 cri.go:89] found id: ""
	I1210 00:03:14.764868   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.764881   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:14.764890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:14.764955   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:14.803557   84547 cri.go:89] found id: ""
	I1210 00:03:14.803605   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.803616   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:14.803621   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:14.803674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:14.841070   84547 cri.go:89] found id: ""
	I1210 00:03:14.841102   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.841122   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:14.841137   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:14.841161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:14.907607   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:14.907631   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:14.907644   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:14.985179   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:14.985209   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:15.022654   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:15.022687   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:15.075224   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:15.075260   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:13.697072   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:15.697892   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:12.757232   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:14.758529   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.256541   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:15.490871   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.990140   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.589836   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:17.604045   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:17.604102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:17.643211   84547 cri.go:89] found id: ""
	I1210 00:03:17.643241   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.643251   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:17.643260   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:17.643320   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:17.675436   84547 cri.go:89] found id: ""
	I1210 00:03:17.675462   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.675472   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:17.675479   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:17.675538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:17.709428   84547 cri.go:89] found id: ""
	I1210 00:03:17.709465   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.709476   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:17.709484   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:17.709544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:17.764088   84547 cri.go:89] found id: ""
	I1210 00:03:17.764121   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.764132   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:17.764139   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:17.764200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:17.797908   84547 cri.go:89] found id: ""
	I1210 00:03:17.797933   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.797944   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:17.797953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:17.798016   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:17.829580   84547 cri.go:89] found id: ""
	I1210 00:03:17.829609   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.829620   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:17.829628   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:17.829687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:17.865106   84547 cri.go:89] found id: ""
	I1210 00:03:17.865136   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.865145   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:17.865150   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:17.865200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:17.896757   84547 cri.go:89] found id: ""
	I1210 00:03:17.896791   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.896803   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:17.896814   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:17.896830   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:17.977210   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:17.977244   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:18.016843   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:18.016867   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:18.065263   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:18.065294   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:18.078188   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:18.078212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:18.148164   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:17.698675   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:20.199732   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:19.755949   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:21.756959   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:19.990714   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:21.990766   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:23.991443   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:23.991468   83900 pod_ready.go:82] duration metric: took 4m0.007840011s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	E1210 00:03:23.991477   83900 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 00:03:23.991484   83900 pod_ready.go:39] duration metric: took 4m6.196076299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:03:23.991501   83900 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:03:23.991534   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:23.991610   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:24.032198   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:24.032227   83900 cri.go:89] found id: ""
	I1210 00:03:24.032237   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:24.032303   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.037671   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:24.037746   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:24.076467   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:24.076499   83900 cri.go:89] found id: ""
	I1210 00:03:24.076507   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:24.076557   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.081125   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:24.081193   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:24.125433   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:24.125472   83900 cri.go:89] found id: ""
	I1210 00:03:24.125483   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:24.125542   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.131023   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:24.131097   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:24.173215   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:24.173239   83900 cri.go:89] found id: ""
	I1210 00:03:24.173247   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:24.173303   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.179027   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:24.179108   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:24.228905   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:24.228936   83900 cri.go:89] found id: ""
	I1210 00:03:24.228946   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:24.229004   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.234441   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:24.234520   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:20.648921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:20.661458   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:20.661516   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:20.693473   84547 cri.go:89] found id: ""
	I1210 00:03:20.693509   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.693518   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:20.693524   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:20.693576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:20.726279   84547 cri.go:89] found id: ""
	I1210 00:03:20.726301   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.726309   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:20.726314   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:20.726375   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:20.759895   84547 cri.go:89] found id: ""
	I1210 00:03:20.759922   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.759931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:20.759936   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:20.759988   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:20.793934   84547 cri.go:89] found id: ""
	I1210 00:03:20.793964   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.793974   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:20.793982   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:20.794049   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:20.828040   84547 cri.go:89] found id: ""
	I1210 00:03:20.828066   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.828077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:20.828084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:20.828143   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:20.861917   84547 cri.go:89] found id: ""
	I1210 00:03:20.861950   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.861960   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:20.861967   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:20.862028   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:20.896458   84547 cri.go:89] found id: ""
	I1210 00:03:20.896481   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.896489   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:20.896494   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:20.896551   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:20.931014   84547 cri.go:89] found id: ""
	I1210 00:03:20.931044   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.931052   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:20.931061   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:20.931072   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:20.968693   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:20.968718   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:21.021880   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:21.021917   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:21.035848   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:21.035881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:21.104570   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:21.104604   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:21.104621   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:23.679447   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:23.692326   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:23.692426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:23.729773   84547 cri.go:89] found id: ""
	I1210 00:03:23.729796   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.729804   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:23.729809   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:23.729855   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:23.765875   84547 cri.go:89] found id: ""
	I1210 00:03:23.765905   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.765915   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:23.765922   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:23.765984   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:23.800785   84547 cri.go:89] found id: ""
	I1210 00:03:23.800821   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.800831   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:23.800838   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:23.800902   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:23.838137   84547 cri.go:89] found id: ""
	I1210 00:03:23.838160   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.838168   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:23.838173   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:23.838222   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:23.869916   84547 cri.go:89] found id: ""
	I1210 00:03:23.869947   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.869958   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:23.869966   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:23.870027   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:23.903939   84547 cri.go:89] found id: ""
	I1210 00:03:23.903962   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.903969   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:23.903975   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:23.904021   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:23.937092   84547 cri.go:89] found id: ""
	I1210 00:03:23.937119   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.937127   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:23.937133   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:23.937194   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:23.970562   84547 cri.go:89] found id: ""
	I1210 00:03:23.970599   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.970611   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:23.970622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:23.970641   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:23.983364   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:23.983394   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:24.066624   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:24.066645   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:24.066657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:24.151466   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:24.151502   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:24.204590   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:24.204615   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:22.698354   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.698462   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.256922   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:26.755965   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.279840   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:24.279859   83900 cri.go:89] found id: ""
	I1210 00:03:24.279866   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:24.279922   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.283898   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:24.283972   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:24.319541   83900 cri.go:89] found id: ""
	I1210 00:03:24.319598   83900 logs.go:282] 0 containers: []
	W1210 00:03:24.319612   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:24.319624   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:24.319686   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:24.356205   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:24.356228   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:24.356234   83900 cri.go:89] found id: ""
	I1210 00:03:24.356242   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:24.356302   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.361772   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.366656   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:24.366690   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:24.408922   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:24.408955   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:24.466720   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:24.466762   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:24.500254   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:24.500290   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:24.535025   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:24.535058   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:24.575706   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:24.575732   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:24.645227   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:24.645265   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:24.658833   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:24.658878   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:24.702076   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:24.702107   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:24.738869   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:24.738900   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:25.204715   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:25.204753   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:25.320267   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:25.320315   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:25.370599   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:25.370635   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:27.910863   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:27.925070   83900 api_server.go:72] duration metric: took 4m17.856306845s to wait for apiserver process to appear ...
	I1210 00:03:27.925098   83900 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:03:27.925164   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:27.925227   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:27.959991   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:27.960017   83900 cri.go:89] found id: ""
	I1210 00:03:27.960026   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:27.960081   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:27.964069   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:27.964128   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:27.997618   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:27.997640   83900 cri.go:89] found id: ""
	I1210 00:03:27.997650   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:27.997710   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.001497   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:28.001570   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:28.034858   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:28.034883   83900 cri.go:89] found id: ""
	I1210 00:03:28.034891   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:28.034953   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.038775   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:28.038852   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:28.074473   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:28.074497   83900 cri.go:89] found id: ""
	I1210 00:03:28.074506   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:28.074568   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.079082   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:28.079149   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:28.113948   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:28.113971   83900 cri.go:89] found id: ""
	I1210 00:03:28.113981   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:28.114045   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.118930   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:28.118982   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:28.162794   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:28.162820   83900 cri.go:89] found id: ""
	I1210 00:03:28.162827   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:28.162872   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.166637   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:28.166715   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:28.206591   83900 cri.go:89] found id: ""
	I1210 00:03:28.206619   83900 logs.go:282] 0 containers: []
	W1210 00:03:28.206627   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:28.206633   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:28.206693   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:28.251722   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:28.251748   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:28.251754   83900 cri.go:89] found id: ""
	I1210 00:03:28.251763   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:28.251821   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.256785   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.260355   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:28.260378   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:28.295801   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:28.295827   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:28.734920   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:28.734963   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:28.804266   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:28.804305   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:28.818223   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:28.818251   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:28.931008   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:28.931044   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:28.977544   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:28.977592   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:29.016645   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:29.016679   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:29.052920   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:29.052945   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:29.094464   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:29.094497   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:29.132520   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:29.132545   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:29.180364   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:29.180396   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:29.216627   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:29.216657   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:26.774169   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:26.786888   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:26.786964   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:26.820391   84547 cri.go:89] found id: ""
	I1210 00:03:26.820423   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.820433   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:26.820441   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:26.820504   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:26.853927   84547 cri.go:89] found id: ""
	I1210 00:03:26.853955   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.853963   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:26.853971   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:26.854031   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:26.887309   84547 cri.go:89] found id: ""
	I1210 00:03:26.887337   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.887347   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:26.887353   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:26.887415   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:26.923869   84547 cri.go:89] found id: ""
	I1210 00:03:26.923897   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.923908   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:26.923915   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:26.923983   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:26.959919   84547 cri.go:89] found id: ""
	I1210 00:03:26.959952   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.959964   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:26.959971   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:26.960032   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:26.993742   84547 cri.go:89] found id: ""
	I1210 00:03:26.993775   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.993787   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:26.993794   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:26.993853   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:27.026977   84547 cri.go:89] found id: ""
	I1210 00:03:27.027011   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.027021   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:27.027028   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:27.027090   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:27.064135   84547 cri.go:89] found id: ""
	I1210 00:03:27.064164   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.064173   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:27.064181   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:27.064192   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:27.101758   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:27.101784   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:27.155536   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:27.155592   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:27.168891   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:27.168914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:27.242486   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:27.242511   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:27.242525   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:29.821740   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:29.834536   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:29.834604   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:29.868316   84547 cri.go:89] found id: ""
	I1210 00:03:29.868341   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.868348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:29.868354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:29.868402   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:29.902290   84547 cri.go:89] found id: ""
	I1210 00:03:29.902320   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.902331   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:29.902338   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:29.902399   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:29.948750   84547 cri.go:89] found id: ""
	I1210 00:03:29.948780   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.948792   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:29.948800   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:29.948864   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:29.994587   84547 cri.go:89] found id: ""
	I1210 00:03:29.994618   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.994629   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:29.994636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:29.994694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:30.042216   84547 cri.go:89] found id: ""
	I1210 00:03:30.042243   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.042256   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:30.042264   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:30.042345   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:30.075964   84547 cri.go:89] found id: ""
	I1210 00:03:30.075989   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.075999   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:30.076007   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:30.076072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:30.108647   84547 cri.go:89] found id: ""
	I1210 00:03:30.108676   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.108687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:30.108695   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:30.108760   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:30.140983   84547 cri.go:89] found id: ""
	I1210 00:03:30.141013   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.141022   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:30.141030   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:30.141040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:30.198281   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:30.198319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:30.213820   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:30.213849   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:30.283907   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:30.283927   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:30.283940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:30.360731   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:30.360768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:27.197512   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:29.698238   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.698679   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:28.757444   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.255481   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.759061   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1210 00:03:31.763064   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 200:
	ok
	I1210 00:03:31.763974   83900 api_server.go:141] control plane version: v1.31.2
	I1210 00:03:31.763992   83900 api_server.go:131] duration metric: took 3.83888731s to wait for apiserver health ...
	I1210 00:03:31.763999   83900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:03:31.764021   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:31.764077   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:31.798282   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:31.798312   83900 cri.go:89] found id: ""
	I1210 00:03:31.798320   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:31.798375   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.802661   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:31.802726   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:31.837861   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:31.837888   83900 cri.go:89] found id: ""
	I1210 00:03:31.837896   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:31.837943   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.842139   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:31.842224   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:31.876940   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:31.876967   83900 cri.go:89] found id: ""
	I1210 00:03:31.876977   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:31.877043   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.881416   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:31.881491   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:31.915449   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:31.915481   83900 cri.go:89] found id: ""
	I1210 00:03:31.915491   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:31.915575   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.919342   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:31.919400   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:31.954554   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:31.954583   83900 cri.go:89] found id: ""
	I1210 00:03:31.954592   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:31.954641   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.958484   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:31.958569   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:32.000361   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:32.000390   83900 cri.go:89] found id: ""
	I1210 00:03:32.000401   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:32.000462   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.004339   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:32.004400   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:32.043231   83900 cri.go:89] found id: ""
	I1210 00:03:32.043254   83900 logs.go:282] 0 containers: []
	W1210 00:03:32.043279   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:32.043285   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:32.043334   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:32.079572   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:32.079597   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:32.079608   83900 cri.go:89] found id: ""
	I1210 00:03:32.079615   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:32.079661   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.083526   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.087101   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:32.087126   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:32.494977   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:32.495021   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:32.509299   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:32.509342   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:32.612029   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:32.612066   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:32.656868   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:32.656900   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:32.695347   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:32.695376   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:32.732071   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:32.732100   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:32.767692   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:32.767718   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:32.836047   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:32.836088   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:32.887136   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:32.887175   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:32.929836   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:32.929873   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:32.972459   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:32.972492   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:33.029387   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:33.029415   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:32.904302   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:32.919123   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:32.919209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:32.952847   84547 cri.go:89] found id: ""
	I1210 00:03:32.952879   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.952889   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:32.952897   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:32.952961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:32.986993   84547 cri.go:89] found id: ""
	I1210 00:03:32.987020   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.987029   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:32.987035   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:32.987085   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:33.027506   84547 cri.go:89] found id: ""
	I1210 00:03:33.027536   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.027548   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:33.027556   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:33.027630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:33.061553   84547 cri.go:89] found id: ""
	I1210 00:03:33.061588   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.061605   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:33.061613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:33.061673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:33.097640   84547 cri.go:89] found id: ""
	I1210 00:03:33.097679   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.097693   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:33.097702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:33.097783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:33.131725   84547 cri.go:89] found id: ""
	I1210 00:03:33.131758   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.131768   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:33.131775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:33.131839   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:33.165731   84547 cri.go:89] found id: ""
	I1210 00:03:33.165759   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.165771   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:33.165779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:33.165841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:33.200288   84547 cri.go:89] found id: ""
	I1210 00:03:33.200310   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.200320   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:33.200338   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:33.200351   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:33.251524   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:33.251577   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:33.266197   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:33.266223   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:33.343529   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:33.343583   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:33.343600   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:33.420133   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:33.420175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:35.577482   83900 system_pods.go:59] 8 kube-system pods found
	I1210 00:03:35.577511   83900 system_pods.go:61] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running
	I1210 00:03:35.577516   83900 system_pods.go:61] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running
	I1210 00:03:35.577520   83900 system_pods.go:61] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running
	I1210 00:03:35.577524   83900 system_pods.go:61] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running
	I1210 00:03:35.577527   83900 system_pods.go:61] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running
	I1210 00:03:35.577529   83900 system_pods.go:61] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running
	I1210 00:03:35.577537   83900 system_pods.go:61] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:03:35.577542   83900 system_pods.go:61] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running
	I1210 00:03:35.577549   83900 system_pods.go:74] duration metric: took 3.81354476s to wait for pod list to return data ...
	I1210 00:03:35.577556   83900 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:03:35.579951   83900 default_sa.go:45] found service account: "default"
	I1210 00:03:35.579971   83900 default_sa.go:55] duration metric: took 2.410307ms for default service account to be created ...
	I1210 00:03:35.579977   83900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:03:35.584102   83900 system_pods.go:86] 8 kube-system pods found
	I1210 00:03:35.584126   83900 system_pods.go:89] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running
	I1210 00:03:35.584131   83900 system_pods.go:89] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running
	I1210 00:03:35.584136   83900 system_pods.go:89] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running
	I1210 00:03:35.584142   83900 system_pods.go:89] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running
	I1210 00:03:35.584148   83900 system_pods.go:89] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running
	I1210 00:03:35.584153   83900 system_pods.go:89] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running
	I1210 00:03:35.584163   83900 system_pods.go:89] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:03:35.584168   83900 system_pods.go:89] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running
	I1210 00:03:35.584181   83900 system_pods.go:126] duration metric: took 4.196356ms to wait for k8s-apps to be running ...
	I1210 00:03:35.584192   83900 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:03:35.584235   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:03:35.598192   83900 system_svc.go:56] duration metric: took 13.993491ms WaitForService to wait for kubelet
	I1210 00:03:35.598218   83900 kubeadm.go:582] duration metric: took 4m25.529459505s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:03:35.598238   83900 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:03:35.600992   83900 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:03:35.601012   83900 node_conditions.go:123] node cpu capacity is 2
	I1210 00:03:35.601025   83900 node_conditions.go:105] duration metric: took 2.782415ms to run NodePressure ...
	I1210 00:03:35.601038   83900 start.go:241] waiting for startup goroutines ...
	I1210 00:03:35.601049   83900 start.go:246] waiting for cluster config update ...
	I1210 00:03:35.601063   83900 start.go:255] writing updated cluster config ...
	I1210 00:03:35.601360   83900 ssh_runner.go:195] Run: rm -f paused
	I1210 00:03:35.647487   83900 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:03:35.650508   83900 out.go:177] * Done! kubectl is now configured to use "embed-certs-825613" cluster and "default" namespace by default
	I1210 00:03:34.199682   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:36.696731   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:33.258255   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:35.756802   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:35.971411   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:35.988993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:35.989059   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:36.022639   84547 cri.go:89] found id: ""
	I1210 00:03:36.022673   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.022684   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:36.022692   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:36.022758   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:36.055421   84547 cri.go:89] found id: ""
	I1210 00:03:36.055452   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.055461   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:36.055466   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:36.055514   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:36.087690   84547 cri.go:89] found id: ""
	I1210 00:03:36.087721   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.087731   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:36.087738   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:36.087802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:36.125209   84547 cri.go:89] found id: ""
	I1210 00:03:36.125240   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.125249   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:36.125254   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:36.125304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:36.157544   84547 cri.go:89] found id: ""
	I1210 00:03:36.157586   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.157599   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:36.157607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:36.157676   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:36.193415   84547 cri.go:89] found id: ""
	I1210 00:03:36.193448   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.193459   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:36.193466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:36.193525   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:36.228941   84547 cri.go:89] found id: ""
	I1210 00:03:36.228971   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.228982   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:36.228989   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:36.229052   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:36.264821   84547 cri.go:89] found id: ""
	I1210 00:03:36.264851   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.264862   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:36.264873   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:36.264887   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:36.314841   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:36.314881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:36.328664   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:36.328695   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:36.402929   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:36.402956   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:36.402970   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:36.480270   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:36.480306   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:39.022748   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:39.036323   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:39.036382   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:39.070582   84547 cri.go:89] found id: ""
	I1210 00:03:39.070608   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.070616   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:39.070622   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:39.070690   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:39.108783   84547 cri.go:89] found id: ""
	I1210 00:03:39.108814   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.108825   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:39.108832   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:39.108889   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:39.142300   84547 cri.go:89] found id: ""
	I1210 00:03:39.142330   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.142338   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:39.142344   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:39.142400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:39.177859   84547 cri.go:89] found id: ""
	I1210 00:03:39.177891   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.177903   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:39.177911   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:39.177965   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:39.212109   84547 cri.go:89] found id: ""
	I1210 00:03:39.212140   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.212152   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:39.212160   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:39.212209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:39.245451   84547 cri.go:89] found id: ""
	I1210 00:03:39.245490   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.245501   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:39.245509   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:39.245569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:39.278925   84547 cri.go:89] found id: ""
	I1210 00:03:39.278957   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.278967   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:39.278974   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:39.279063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:39.312322   84547 cri.go:89] found id: ""
	I1210 00:03:39.312350   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.312358   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:39.312367   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:39.312377   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:39.362844   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:39.362882   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:39.375938   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:39.375967   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:39.443649   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:39.443676   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:39.443691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:39.523722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:39.523757   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:38.698105   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:41.198814   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:38.256567   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:40.257434   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:42.062317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:42.075094   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:42.075157   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:42.106719   84547 cri.go:89] found id: ""
	I1210 00:03:42.106744   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.106751   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:42.106757   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:42.106805   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:42.141977   84547 cri.go:89] found id: ""
	I1210 00:03:42.142011   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.142022   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:42.142029   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:42.142089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:42.175117   84547 cri.go:89] found id: ""
	I1210 00:03:42.175151   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.175164   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:42.175172   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:42.175243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:42.209871   84547 cri.go:89] found id: ""
	I1210 00:03:42.209900   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.209911   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:42.209919   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:42.209982   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:42.252784   84547 cri.go:89] found id: ""
	I1210 00:03:42.252812   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.252822   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:42.252830   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:42.252891   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:42.285934   84547 cri.go:89] found id: ""
	I1210 00:03:42.285961   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.285973   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:42.285980   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:42.286040   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:42.327393   84547 cri.go:89] found id: ""
	I1210 00:03:42.327423   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.327433   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:42.327440   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:42.327498   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:42.362244   84547 cri.go:89] found id: ""
	I1210 00:03:42.362279   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.362289   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:42.362302   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:42.362317   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:42.441656   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:42.441691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:42.479357   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:42.479397   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:42.533002   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:42.533038   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:42.546485   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:42.546514   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:42.616912   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:45.117156   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:45.131265   84547 kubeadm.go:597] duration metric: took 4m2.157458218s to restartPrimaryControlPlane
	W1210 00:03:45.131350   84547 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:03:45.131379   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:03:43.699026   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:46.197420   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:42.756686   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:44.251077   84259 pod_ready.go:82] duration metric: took 4m0.000996899s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" ...
	E1210 00:03:44.251112   84259 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 00:03:44.251136   84259 pod_ready.go:39] duration metric: took 4m13.15350289s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:03:44.251170   84259 kubeadm.go:597] duration metric: took 4m23.227405415s to restartPrimaryControlPlane
	W1210 00:03:44.251238   84259 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:03:44.251368   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:03:46.778025   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.646620044s)
	I1210 00:03:46.778099   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:03:46.792384   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:03:46.802897   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:03:46.813393   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:03:46.813417   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:03:46.813473   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:03:46.823016   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:03:46.823093   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:03:46.832912   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:03:46.841961   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:03:46.842019   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:03:46.851160   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.859879   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:03:46.859938   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.869192   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:03:46.878423   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:03:46.878487   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:03:46.888103   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:03:47.105146   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:03:48.698211   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:51.197463   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:53.198578   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:55.697251   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:57.698289   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:00.198291   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:02.696926   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:04.697431   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:06.698260   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:10.270952   84259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.019545181s)
	I1210 00:04:10.271025   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:10.292112   84259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:04:10.307538   84259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:04:10.325375   84259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:04:10.325406   84259 kubeadm.go:157] found existing configuration files:
	
	I1210 00:04:10.325465   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 00:04:10.338892   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:04:10.338960   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:04:10.353888   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 00:04:10.364787   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:04:10.364852   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:04:10.374513   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 00:04:10.393430   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:04:10.393486   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:04:10.403621   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 00:04:10.412759   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:04:10.412824   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:04:10.422153   84259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:04:10.468789   84259 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:04:10.468852   84259 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:04:10.566712   84259 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:04:10.566840   84259 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:04:10.566939   84259 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:04:10.574253   84259 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:04:10.576376   84259 out.go:235]   - Generating certificates and keys ...
	I1210 00:04:10.576485   84259 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:04:10.576566   84259 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:04:10.576722   84259 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:04:10.576816   84259 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:04:10.576915   84259 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:04:10.576991   84259 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:04:10.577107   84259 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:04:10.577214   84259 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:04:10.577319   84259 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:04:10.577433   84259 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:04:10.577522   84259 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:04:10.577616   84259 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:04:10.870887   84259 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:04:10.977055   84259 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:04:11.160320   84259 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:04:11.207864   84259 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:04:11.315037   84259 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:04:11.315715   84259 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:04:11.319135   84259 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:04:09.197254   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:11.197831   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:11.321063   84259 out.go:235]   - Booting up control plane ...
	I1210 00:04:11.321193   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:04:11.321296   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:04:11.322134   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:04:11.341567   84259 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:04:11.347658   84259 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:04:11.347736   84259 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:04:11.492127   84259 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:04:11.492293   84259 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:04:11.994410   84259 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.01471ms
	I1210 00:04:11.994533   84259 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:04:13.697731   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:16.198771   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:16.998638   84259 kubeadm.go:310] [api-check] The API server is healthy after 5.002934845s
	I1210 00:04:17.009930   84259 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:04:17.025483   84259 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:04:17.059445   84259 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:04:17.059762   84259 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-871210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:04:17.071665   84259 kubeadm.go:310] [bootstrap-token] Using token: 1x1r34.gs3p33sqgju9dylj
	I1210 00:04:17.073936   84259 out.go:235]   - Configuring RBAC rules ...
	I1210 00:04:17.074055   84259 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:04:17.079920   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:04:17.090408   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:04:17.094072   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:04:17.096951   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:04:17.099929   84259 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:04:17.404431   84259 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:04:17.837125   84259 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:04:18.404721   84259 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:04:18.405661   84259 kubeadm.go:310] 
	I1210 00:04:18.405757   84259 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:04:18.405770   84259 kubeadm.go:310] 
	I1210 00:04:18.405871   84259 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:04:18.405883   84259 kubeadm.go:310] 
	I1210 00:04:18.405916   84259 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:04:18.406012   84259 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:04:18.406101   84259 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:04:18.406111   84259 kubeadm.go:310] 
	I1210 00:04:18.406197   84259 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:04:18.406209   84259 kubeadm.go:310] 
	I1210 00:04:18.406318   84259 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:04:18.406329   84259 kubeadm.go:310] 
	I1210 00:04:18.406412   84259 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:04:18.406482   84259 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:04:18.406544   84259 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:04:18.406550   84259 kubeadm.go:310] 
	I1210 00:04:18.406643   84259 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:04:18.406787   84259 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:04:18.406802   84259 kubeadm.go:310] 
	I1210 00:04:18.406919   84259 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 1x1r34.gs3p33sqgju9dylj \
	I1210 00:04:18.407089   84259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1210 00:04:18.407117   84259 kubeadm.go:310] 	--control-plane 
	I1210 00:04:18.407123   84259 kubeadm.go:310] 
	I1210 00:04:18.407207   84259 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:04:18.407214   84259 kubeadm.go:310] 
	I1210 00:04:18.407300   84259 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 1x1r34.gs3p33sqgju9dylj \
	I1210 00:04:18.407433   84259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1210 00:04:18.408197   84259 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
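The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). A short sketch of recomputing that hash from the CA certificate for verification; /etc/kubernetes/pki/ca.crt is the kubeadm default location and may differ elsewhere:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}

Run against this cluster's CA, the output should match the sha256:988b8db5... value embedded in the join commands.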
	I1210 00:04:18.408272   84259 cni.go:84] Creating CNI manager for ""
	I1210 00:04:18.408289   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:04:18.410841   84259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:04:18.697285   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:20.698266   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:18.412209   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:04:18.422123   84259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
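Here minikube creates /etc/cni/net.d and copies in a generated bridge conflist (496 bytes) for the kvm2 + crio combination. For orientation, a representative bridge configuration written the same way; the plugin list, subnet, and file contents are illustrative and will not match minikube's generated 1-k8s.conflist byte for byte:

package main

import "os"

// A typical CNI bridge configuration; values here are placeholders for illustration.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}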
	I1210 00:04:18.440972   84259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:04:18.441058   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:18.441138   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-871210 minikube.k8s.io/updated_at=2024_12_10T00_04_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=default-k8s-diff-port-871210 minikube.k8s.io/primary=true
	I1210 00:04:18.462185   84259 ops.go:34] apiserver oom_adj: -16
	I1210 00:04:18.625855   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:19.126760   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:19.626829   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:20.126663   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:20.626893   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:21.126851   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:21.625892   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.126904   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.626450   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.715909   84259 kubeadm.go:1113] duration metric: took 4.274924102s to wait for elevateKubeSystemPrivileges
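The repeated `kubectl get sa default` runs above are a simple readiness poll: minikube keeps retrying until the default service account exists (and the minikube-rbac clusterrolebinding has taken effect) before declaring elevateKubeSystemPrivileges complete. A minimal sketch of such a poll, shelling out to kubectl at a fixed interval (kubeconfig path and timeout are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it succeeds or the timeout expires.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"get", "sa", "default").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("default service account is ready")
}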
	I1210 00:04:22.715958   84259 kubeadm.go:394] duration metric: took 5m1.740971462s to StartCluster
	I1210 00:04:22.715982   84259 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:04:22.716080   84259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:04:22.718538   84259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:04:22.718890   84259 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:04:22.718958   84259 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:04:22.719059   84259 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719079   84259 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.719091   84259 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:04:22.719100   84259 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719123   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.719120   84259 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719169   84259 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.719184   84259 addons.go:243] addon metrics-server should already be in state true
	I1210 00:04:22.719216   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.719133   84259 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:04:22.719134   84259 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-871210"
	I1210 00:04:22.719582   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719631   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.719717   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719728   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719753   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.719763   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.720519   84259 out.go:177] * Verifying Kubernetes components...
	I1210 00:04:22.722140   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:04:22.735592   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
	I1210 00:04:22.735716   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I1210 00:04:22.736075   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.736100   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.736605   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.736626   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.736605   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.736642   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.736995   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.737007   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.737217   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.737595   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.737639   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.739278   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41255
	I1210 00:04:22.741315   84259 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.741337   84259 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:04:22.741367   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.741739   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.741778   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.742259   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.742829   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.742858   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.743182   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.743632   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.743663   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.754879   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1210 00:04:22.755310   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.756015   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.756032   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.756397   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.756576   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.758535   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.759514   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33065
	I1210 00:04:22.759891   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.759925   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39821
	I1210 00:04:22.760309   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.760330   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.760346   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.760435   84259 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:04:22.760689   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.760831   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.760845   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.760860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.761166   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.761589   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.761627   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.761737   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:04:22.761754   84259 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:04:22.761774   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.762882   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.764440   84259 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:04:22.764810   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.765316   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.765339   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.765474   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.765675   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.765828   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.765894   84259 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:04:22.765912   84259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:04:22.765919   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.765934   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.768979   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.769446   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.769498   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.769626   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.769832   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.769956   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.770078   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.780995   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45841
	I1210 00:04:22.781451   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.782077   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.782098   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.782467   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.782705   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.784771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.785015   84259 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:04:22.785030   84259 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:04:22.785047   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.788208   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.788659   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.788690   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.788771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.789148   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.789275   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.789386   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.926076   84259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:04:22.945059   84259 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-871210" to be "Ready" ...
	I1210 00:04:22.970684   84259 node_ready.go:49] node "default-k8s-diff-port-871210" has status "Ready":"True"
	I1210 00:04:22.970727   84259 node_ready.go:38] duration metric: took 25.618738ms for node "default-k8s-diff-port-871210" to be "Ready" ...
	I1210 00:04:22.970740   84259 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:22.989661   84259 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace to be "Ready" ...
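The pod_ready lines throughout this log come from polling each system pod's Ready condition. The same check can be reproduced by hand; a small sketch using kubectl's jsonpath output (context and pod name taken from this run, any pod/namespace can be substituted):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// podReady reports whether the pod's Ready condition is "True".
func podReady(kubeContext, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	ready, err := podReady("default-k8s-diff-port-871210", "kube-system", "coredns-7c65d6cfc9-7xpcc")
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", ready)
}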
	I1210 00:04:23.045411   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:04:23.045433   84259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:04:23.074907   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:04:23.076857   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:04:23.094809   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:04:23.094836   84259 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:04:23.125107   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:04:23.125136   84259 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:04:23.174286   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
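Addon enablement above boils down to two steps: scp the manifest files into /etc/kubernetes/addons/ on the node, then run the node's bundled kubectl with `apply -f` for each file against the node-local kubeconfig. A condensed sketch of that final apply step, reusing the paths from the log (this would run on the node itself; from the host it goes through the ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.2/kubectl", args...)
	// Point kubectl at the node-local kubeconfig, as the logged invocation does.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}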
	I1210 00:04:23.554521   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554556   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554568   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554589   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554855   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.554871   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.554880   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554882   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.554888   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554893   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.554891   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.554903   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.555087   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.555099   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.555283   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.555354   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.555379   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.579741   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.579775   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.580075   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.580091   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.580113   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.776674   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.776701   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.777020   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.777060   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.777068   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.777076   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.777089   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.777314   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.777330   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.777347   84259 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-871210"
	I1210 00:04:23.778992   84259 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 00:04:23.198263   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:25.198299   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:23.780354   84259 addons.go:510] duration metric: took 1.061403814s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 00:04:25.012490   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:27.495987   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:27.697345   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:29.698123   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:31.692183   83859 pod_ready.go:82] duration metric: took 4m0.000797786s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" ...
	E1210 00:04:31.692211   83859 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 00:04:31.692228   83859 pod_ready.go:39] duration metric: took 4m11.541153015s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:31.692258   83859 kubeadm.go:597] duration metric: took 4m19.334550967s to restartPrimaryControlPlane
	W1210 00:04:31.692305   83859 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:04:31.692329   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:04:29.995452   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:31.997927   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:34.496689   84259 pod_ready.go:93] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.496716   84259 pod_ready.go:82] duration metric: took 11.507027662s for pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.496730   84259 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.501166   84259 pod_ready.go:93] pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.501188   84259 pod_ready.go:82] duration metric: took 4.45016ms for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.501198   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.506061   84259 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.506085   84259 pod_ready.go:82] duration metric: took 4.881919ms for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.506096   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.510401   84259 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.510422   84259 pod_ready.go:82] duration metric: took 4.320761ms for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.510432   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pj85d" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.514319   84259 pod_ready.go:93] pod "kube-proxy-pj85d" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.514340   84259 pod_ready.go:82] duration metric: took 3.902541ms for pod "kube-proxy-pj85d" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.514348   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.896369   84259 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.896393   84259 pod_ready.go:82] duration metric: took 382.038726ms for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.896401   84259 pod_ready.go:39] duration metric: took 11.925650242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:34.896415   84259 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:04:34.896466   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:04:34.910589   84259 api_server.go:72] duration metric: took 12.191654524s to wait for apiserver process to appear ...
	I1210 00:04:34.910617   84259 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:04:34.910639   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1210 00:04:34.916503   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I1210 00:04:34.917955   84259 api_server.go:141] control plane version: v1.31.2
	I1210 00:04:34.917979   84259 api_server.go:131] duration metric: took 7.355946ms to wait for apiserver health ...
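The healthz check above is an HTTPS GET against the apiserver's /healthz endpoint, treated as healthy on a 200 response with body "ok". A minimal stand-alone version of that probe; certificate verification is skipped here only to keep the sketch short, whereas minikube's own check authenticates with the cluster's client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.72.54:8444/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the example short; do not use it outside a throwaway check.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}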
	I1210 00:04:34.917987   84259 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:04:35.098220   84259 system_pods.go:59] 9 kube-system pods found
	I1210 00:04:35.098252   84259 system_pods.go:61] "coredns-7c65d6cfc9-7xpcc" [6cff7e56-1785-41c8-bd9c-db9d3f0bd05f] Running
	I1210 00:04:35.098259   84259 system_pods.go:61] "coredns-7c65d6cfc9-z2n25" [b6b81952-7281-4705-9536-06eb939a5807] Running
	I1210 00:04:35.098265   84259 system_pods.go:61] "etcd-default-k8s-diff-port-871210" [e9c747a1-72c0-4b30-b401-c39a41fe5eb5] Running
	I1210 00:04:35.098271   84259 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-871210" [1a27b619-b576-4a36-b88a-0c498ad38628] Running
	I1210 00:04:35.098276   84259 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-871210" [d22a69b7-3389-46a9-9ca5-a3101aa34e05] Running
	I1210 00:04:35.098280   84259 system_pods.go:61] "kube-proxy-pj85d" [d1b9b056-f4a3-419c-86fa-a94d88464f74] Running
	I1210 00:04:35.098285   84259 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-871210" [50a03a5c-d0e5-454a-9a7b-49963ff59d06] Running
	I1210 00:04:35.098294   84259 system_pods.go:61] "metrics-server-6867b74b74-7g2qm" [49ac129a-c85d-4af1-b3b2-06bc10bced77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:04:35.098298   84259 system_pods.go:61] "storage-provisioner" [ea716edd-4030-4ec3-b094-c3a50154b473] Running
	I1210 00:04:35.098309   84259 system_pods.go:74] duration metric: took 180.315426ms to wait for pod list to return data ...
	I1210 00:04:35.098322   84259 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:04:35.294342   84259 default_sa.go:45] found service account: "default"
	I1210 00:04:35.294367   84259 default_sa.go:55] duration metric: took 196.039183ms for default service account to be created ...
	I1210 00:04:35.294376   84259 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:04:35.497331   84259 system_pods.go:86] 9 kube-system pods found
	I1210 00:04:35.497364   84259 system_pods.go:89] "coredns-7c65d6cfc9-7xpcc" [6cff7e56-1785-41c8-bd9c-db9d3f0bd05f] Running
	I1210 00:04:35.497369   84259 system_pods.go:89] "coredns-7c65d6cfc9-z2n25" [b6b81952-7281-4705-9536-06eb939a5807] Running
	I1210 00:04:35.497373   84259 system_pods.go:89] "etcd-default-k8s-diff-port-871210" [e9c747a1-72c0-4b30-b401-c39a41fe5eb5] Running
	I1210 00:04:35.497377   84259 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-871210" [1a27b619-b576-4a36-b88a-0c498ad38628] Running
	I1210 00:04:35.497381   84259 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-871210" [d22a69b7-3389-46a9-9ca5-a3101aa34e05] Running
	I1210 00:04:35.497384   84259 system_pods.go:89] "kube-proxy-pj85d" [d1b9b056-f4a3-419c-86fa-a94d88464f74] Running
	I1210 00:04:35.497387   84259 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-871210" [50a03a5c-d0e5-454a-9a7b-49963ff59d06] Running
	I1210 00:04:35.497396   84259 system_pods.go:89] "metrics-server-6867b74b74-7g2qm" [49ac129a-c85d-4af1-b3b2-06bc10bced77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:04:35.497400   84259 system_pods.go:89] "storage-provisioner" [ea716edd-4030-4ec3-b094-c3a50154b473] Running
	I1210 00:04:35.497409   84259 system_pods.go:126] duration metric: took 203.02694ms to wait for k8s-apps to be running ...
	I1210 00:04:35.497416   84259 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:04:35.497456   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:35.512217   84259 system_svc.go:56] duration metric: took 14.792056ms WaitForService to wait for kubelet
	I1210 00:04:35.512248   84259 kubeadm.go:582] duration metric: took 12.793318604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:04:35.512274   84259 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:04:35.695292   84259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:04:35.695318   84259 node_conditions.go:123] node cpu capacity is 2
	I1210 00:04:35.695329   84259 node_conditions.go:105] duration metric: took 183.048181ms to run NodePressure ...
	I1210 00:04:35.695341   84259 start.go:241] waiting for startup goroutines ...
	I1210 00:04:35.695348   84259 start.go:246] waiting for cluster config update ...
	I1210 00:04:35.695361   84259 start.go:255] writing updated cluster config ...
	I1210 00:04:35.695666   84259 ssh_runner.go:195] Run: rm -f paused
	I1210 00:04:35.742539   84259 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:04:35.744394   84259 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-871210" cluster and "default" namespace by default
	I1210 00:04:57.851348   83859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.159001958s)
	I1210 00:04:57.851413   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:57.866601   83859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:04:57.876643   83859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:04:57.886172   83859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:04:57.886200   83859 kubeadm.go:157] found existing configuration files:
	
	I1210 00:04:57.886252   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:04:57.895643   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:04:57.895722   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:04:57.905397   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:04:57.914236   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:04:57.914299   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:04:57.923225   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:04:57.932422   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:04:57.932476   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:04:57.942840   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:04:57.952087   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:04:57.952159   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:04:57.961371   83859 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:04:58.005314   83859 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:04:58.005444   83859 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:04:58.098287   83859 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:04:58.098431   83859 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:04:58.098591   83859 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:04:58.106525   83859 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:04:58.109142   83859 out.go:235]   - Generating certificates and keys ...
	I1210 00:04:58.109219   83859 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:04:58.109320   83859 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:04:58.109456   83859 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:04:58.109536   83859 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:04:58.109637   83859 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:04:58.109728   83859 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:04:58.109840   83859 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:04:58.109940   83859 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:04:58.110047   83859 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:04:58.110152   83859 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:04:58.110225   83859 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:04:58.110296   83859 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:04:58.357649   83859 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:04:58.505840   83859 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:04:58.890560   83859 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:04:58.965928   83859 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:04:59.341665   83859 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:04:59.342240   83859 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:04:59.344644   83859 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:04:59.346225   83859 out.go:235]   - Booting up control plane ...
	I1210 00:04:59.346353   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:04:59.348071   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:04:59.348893   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:04:59.366824   83859 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:04:59.372906   83859 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:04:59.372962   83859 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:04:59.509554   83859 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:04:59.509695   83859 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:05:00.014472   83859 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.085309ms
	I1210 00:05:00.014639   83859 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:05:05.017162   83859 kubeadm.go:310] [api-check] The API server is healthy after 5.002677832s
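For reference, the two probes above target well-known endpoints: the kubelet serves a plain-HTTP healthz on 127.0.0.1:10248 and the API server exposes /healthz over its secure port. A rough manual equivalent, run on the node, would be the following sketch (not what minikube itself executes; the binary and kubeconfig paths are the ones shown elsewhere in this log):

    # kubelet health, same endpoint as the [kubelet-check] line above
    curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"

    # API server health, via kubectl instead of raw client certificates
    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz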
	I1210 00:05:05.029078   83859 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:05:05.048524   83859 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:05:05.080458   83859 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:05:05.080735   83859 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-048296 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:05:05.091793   83859 kubeadm.go:310] [bootstrap-token] Using token: y5jzuu.af7syybsclcivlzq
	I1210 00:05:05.093253   83859 out.go:235]   - Configuring RBAC rules ...
	I1210 00:05:05.093401   83859 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:05:05.098842   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:05:05.106341   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:05:05.109935   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:05:05.114403   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:05:05.121436   83859 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:05:05.424690   83859 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:05:05.851096   83859 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:05:06.425724   83859 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:05:06.426713   83859 kubeadm.go:310] 
	I1210 00:05:06.426785   83859 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:05:06.426808   83859 kubeadm.go:310] 
	I1210 00:05:06.426904   83859 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:05:06.426936   83859 kubeadm.go:310] 
	I1210 00:05:06.426981   83859 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:05:06.427061   83859 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:05:06.427110   83859 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:05:06.427120   83859 kubeadm.go:310] 
	I1210 00:05:06.427199   83859 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:05:06.427210   83859 kubeadm.go:310] 
	I1210 00:05:06.427282   83859 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:05:06.427294   83859 kubeadm.go:310] 
	I1210 00:05:06.427381   83859 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:05:06.427486   83859 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:05:06.427623   83859 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:05:06.427635   83859 kubeadm.go:310] 
	I1210 00:05:06.427757   83859 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:05:06.427874   83859 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:05:06.427905   83859 kubeadm.go:310] 
	I1210 00:05:06.428032   83859 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y5jzuu.af7syybsclcivlzq \
	I1210 00:05:06.428167   83859 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1210 00:05:06.428201   83859 kubeadm.go:310] 	--control-plane 
	I1210 00:05:06.428211   83859 kubeadm.go:310] 
	I1210 00:05:06.428322   83859 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:05:06.428332   83859 kubeadm.go:310] 
	I1210 00:05:06.428438   83859 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y5jzuu.af7syybsclcivlzq \
	I1210 00:05:06.428572   83859 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1210 00:05:06.428746   83859 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
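The join commands above embed a bootstrap token and a CA certificate hash; both can be inspected on the control-plane node. A sketch, assuming the certificate directory /var/lib/minikube/certs reported in the [certs] lines above and the kubeadm binary staged by minikube:

    # list bootstrap tokens currently known to the cluster
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm token list

    # recompute the --discovery-token-ca-cert-hash value from the cluster CA
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'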
	I1210 00:05:06.428832   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:05:06.428849   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:05:06.431674   83859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:05:06.433006   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:05:06.444058   83859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
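The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is not printed in the log. A bridge CNI configuration of this kind generally has the shape below (illustrative only; the subnet and plugin options are assumptions, and minikube generates the real file itself):

    # write an example conflist to a scratch path (hypothetical; do not
    # overwrite the file minikube manages at /etc/cni/net.d/1-k8s.conflist)
    cat > /tmp/bridge-example.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF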
	I1210 00:05:06.462707   83859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:05:06.462838   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:06.462873   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-048296 minikube.k8s.io/updated_at=2024_12_10T00_05_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=no-preload-048296 minikube.k8s.io/primary=true
	I1210 00:05:06.493379   83859 ops.go:34] apiserver oom_adj: -16
	I1210 00:05:06.666080   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:07.166762   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:07.666408   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:08.166269   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:08.666734   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:09.166797   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:09.666522   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.166230   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.667061   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.774149   83859 kubeadm.go:1113] duration metric: took 4.311371383s to wait for elevateKubeSystemPrivileges
	I1210 00:05:10.774188   83859 kubeadm.go:394] duration metric: took 4m58.464009996s to StartCluster
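The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: minikube creates the minikube-rbac ClusterRoleBinding and then polls until the default ServiceAccount exists. The same wait, reproduced as a small loop with the on-node paths shown in this log (a sketch, not minikube's own code):

    K=/var/lib/minikube/binaries/v1.31.2/kubectl
    CFG=/var/lib/minikube/kubeconfig
    # wait for kube-controller-manager to create the default ServiceAccount
    until sudo "$K" --kubeconfig="$CFG" get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
    # the binding created a few lines earlier should now be usable
    sudo "$K" --kubeconfig="$CFG" get clusterrolebinding minikube-rbac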
	I1210 00:05:10.774211   83859 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:10.774290   83859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:05:10.777711   83859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:10.778040   83859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:05:10.778133   83859 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:05:10.778230   83859 addons.go:69] Setting storage-provisioner=true in profile "no-preload-048296"
	I1210 00:05:10.778247   83859 addons.go:234] Setting addon storage-provisioner=true in "no-preload-048296"
	W1210 00:05:10.778255   83859 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:05:10.778249   83859 addons.go:69] Setting default-storageclass=true in profile "no-preload-048296"
	I1210 00:05:10.778261   83859 addons.go:69] Setting metrics-server=true in profile "no-preload-048296"
	I1210 00:05:10.778276   83859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-048296"
	I1210 00:05:10.778286   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.778299   83859 addons.go:234] Setting addon metrics-server=true in "no-preload-048296"
	I1210 00:05:10.778339   83859 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1210 00:05:10.778394   83859 addons.go:243] addon metrics-server should already be in state true
	I1210 00:05:10.778446   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.778684   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778716   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778746   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.778721   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.778860   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778897   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.779748   83859 out.go:177] * Verifying Kubernetes components...
	I1210 00:05:10.781467   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:05:10.795108   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34465
	I1210 00:05:10.795227   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I1210 00:05:10.795615   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.795709   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.796135   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.796160   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.796221   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.796240   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.796522   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.796539   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.797128   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.797162   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.797200   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.797167   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.797697   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I1210 00:05:10.798091   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.798587   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.798613   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.798982   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.799324   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.803238   83859 addons.go:234] Setting addon default-storageclass=true in "no-preload-048296"
	W1210 00:05:10.803262   83859 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:05:10.803291   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.803683   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.803722   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.814000   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I1210 00:05:10.814046   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I1210 00:05:10.814470   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.814686   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.814992   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.815014   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.815427   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.815448   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.815515   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.815748   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.815974   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.816171   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.817989   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.818439   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.820342   83859 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:05:10.820360   83859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:05:10.821535   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:05:10.821556   83859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:05:10.821577   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.821652   83859 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:05:10.821673   83859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:05:10.821690   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.825763   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826205   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.826228   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42881
	I1210 00:05:10.826252   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826275   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.826294   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826327   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.826228   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826504   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.826563   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.826633   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.826711   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.826860   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.826864   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.827029   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:10.827152   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.827173   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.827241   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:10.827691   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.829421   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.829471   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.867902   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I1210 00:05:10.868566   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.869138   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.869169   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.869575   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.869782   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.871531   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.871792   83859 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:05:10.871810   83859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:05:10.871832   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.874309   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.874822   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.874845   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.874999   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.875202   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.875354   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.875501   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:11.009426   83859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:05:11.030237   83859 node_ready.go:35] waiting up to 6m0s for node "no-preload-048296" to be "Ready" ...
	I1210 00:05:11.054344   83859 node_ready.go:49] node "no-preload-048296" has status "Ready":"True"
	I1210 00:05:11.054366   83859 node_ready.go:38] duration metric: took 24.096361ms for node "no-preload-048296" to be "Ready" ...
	I1210 00:05:11.054376   83859 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:05:11.071740   83859 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:11.123723   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:05:11.137744   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:05:11.137772   83859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:05:11.159680   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:05:11.179245   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:05:11.179267   83859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:05:11.240553   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:05:11.240580   83859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:05:11.311813   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:05:12.041427   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041463   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041509   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041532   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041886   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.041949   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.041954   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.041963   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041967   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.041972   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041932   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.041986   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.042087   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.042229   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.042245   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.042297   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.042319   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.092616   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.092639   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.092910   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.092923   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.535943   83859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.224081671s)
	I1210 00:05:12.536008   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.536020   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.536456   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.536459   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.536482   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.536491   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.536498   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.536728   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.536735   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.536750   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.536761   83859 addons.go:475] Verifying addon metrics-server=true in "no-preload-048296"
	I1210 00:05:12.538638   83859 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 00:05:12.539910   83859 addons.go:510] duration metric: took 1.761785575s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
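With storage-provisioner, default-storageclass and metrics-server enabled, the resulting objects can be checked directly from the host (using the kubeconfig written above). A quick verification sketch, assuming the usual minikube object names (a metrics-server Deployment, a storage-provisioner Pod, a "standard" StorageClass):

    kubectl -n kube-system get deploy metrics-server
    kubectl -n kube-system get pod storage-provisioner
    kubectl get storageclass standard
    kubectl get apiservice v1beta1.metrics.k8s.io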
	I1210 00:05:13.078954   83859 pod_ready.go:103] pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:05:13.579737   83859 pod_ready.go:93] pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:13.579760   83859 pod_ready.go:82] duration metric: took 2.507994954s for pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:13.579770   83859 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:14.586682   83859 pod_ready.go:93] pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:14.586704   83859 pod_ready.go:82] duration metric: took 1.006927767s for pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:14.586714   83859 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.092318   83859 pod_ready.go:93] pod "etcd-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.092345   83859 pod_ready.go:82] duration metric: took 505.624811ms for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.092356   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.096802   83859 pod_ready.go:93] pod "kube-apiserver-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.096828   83859 pod_ready.go:82] duration metric: took 4.463914ms for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.096839   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.100904   83859 pod_ready.go:93] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.100923   83859 pod_ready.go:82] duration metric: took 4.07832ms for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.100932   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qklxb" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.176527   83859 pod_ready.go:93] pod "kube-proxy-qklxb" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.176552   83859 pod_ready.go:82] duration metric: took 75.613294ms for pod "kube-proxy-qklxb" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.176562   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:17.182952   83859 pod_ready.go:103] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:05:18.181993   83859 pod_ready.go:93] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:18.182016   83859 pod_ready.go:82] duration metric: took 3.005447779s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:18.182024   83859 pod_ready.go:39] duration metric: took 7.127639413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:05:18.182038   83859 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:05:18.182084   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:05:18.196127   83859 api_server.go:72] duration metric: took 7.418043985s to wait for apiserver process to appear ...
	I1210 00:05:18.196155   83859 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:05:18.196176   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:05:18.200537   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 200:
	ok
	I1210 00:05:18.201476   83859 api_server.go:141] control plane version: v1.31.2
	I1210 00:05:18.201501   83859 api_server.go:131] duration metric: took 5.340199ms to wait for apiserver health ...
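The healthz and version probes above can be reproduced through kubectl against the same endpoint; a sketch:

    kubectl get --raw /healthz    # expect: ok
    kubectl version               # server version should report v1.31.2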
	I1210 00:05:18.201508   83859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:05:18.208072   83859 system_pods.go:59] 9 kube-system pods found
	I1210 00:05:18.208105   83859 system_pods.go:61] "coredns-7c65d6cfc9-56djc" [2e25bad9-b88a-4ac8-a180-968bf6b057a2] Running
	I1210 00:05:18.208114   83859 system_pods.go:61] "coredns-7c65d6cfc9-8rxx7" [3fdfd41b-8ef7-41af-a703-ebecfe9ad319] Running
	I1210 00:05:18.208119   83859 system_pods.go:61] "etcd-no-preload-048296" [4447a7d6-df4b-4760-9352-4df0310017ee] Running
	I1210 00:05:18.208125   83859 system_pods.go:61] "kube-apiserver-no-preload-048296" [55a06182-1520-4497-8358-639438eeb297] Running
	I1210 00:05:18.208130   83859 system_pods.go:61] "kube-controller-manager-no-preload-048296" [f30bdb21-7742-46e7-90fe-403679f5e02a] Running
	I1210 00:05:18.208136   83859 system_pods.go:61] "kube-proxy-qklxb" [8bb029a1-abf9-4825-b9ec-0520a78cb3d8] Running
	I1210 00:05:18.208141   83859 system_pods.go:61] "kube-scheduler-no-preload-048296" [019d6d37-c586-4f12-8daf-b60752223cf1] Running
	I1210 00:05:18.208152   83859 system_pods.go:61] "metrics-server-6867b74b74-n2f8c" [8e9f56c9-fd67-4715-9148-1255be17f1fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:05:18.208163   83859 system_pods.go:61] "storage-provisioner" [55fe311f-4610-4805-9fb7-3f1cac7c96e6] Running
	I1210 00:05:18.208176   83859 system_pods.go:74] duration metric: took 6.661729ms to wait for pod list to return data ...
	I1210 00:05:18.208187   83859 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:05:18.375431   83859 default_sa.go:45] found service account: "default"
	I1210 00:05:18.375458   83859 default_sa.go:55] duration metric: took 167.260728ms for default service account to be created ...
	I1210 00:05:18.375467   83859 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:05:18.578148   83859 system_pods.go:86] 9 kube-system pods found
	I1210 00:05:18.578176   83859 system_pods.go:89] "coredns-7c65d6cfc9-56djc" [2e25bad9-b88a-4ac8-a180-968bf6b057a2] Running
	I1210 00:05:18.578183   83859 system_pods.go:89] "coredns-7c65d6cfc9-8rxx7" [3fdfd41b-8ef7-41af-a703-ebecfe9ad319] Running
	I1210 00:05:18.578187   83859 system_pods.go:89] "etcd-no-preload-048296" [4447a7d6-df4b-4760-9352-4df0310017ee] Running
	I1210 00:05:18.578191   83859 system_pods.go:89] "kube-apiserver-no-preload-048296" [55a06182-1520-4497-8358-639438eeb297] Running
	I1210 00:05:18.578195   83859 system_pods.go:89] "kube-controller-manager-no-preload-048296" [f30bdb21-7742-46e7-90fe-403679f5e02a] Running
	I1210 00:05:18.578198   83859 system_pods.go:89] "kube-proxy-qklxb" [8bb029a1-abf9-4825-b9ec-0520a78cb3d8] Running
	I1210 00:05:18.578203   83859 system_pods.go:89] "kube-scheduler-no-preload-048296" [019d6d37-c586-4f12-8daf-b60752223cf1] Running
	I1210 00:05:18.578209   83859 system_pods.go:89] "metrics-server-6867b74b74-n2f8c" [8e9f56c9-fd67-4715-9148-1255be17f1fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:05:18.578213   83859 system_pods.go:89] "storage-provisioner" [55fe311f-4610-4805-9fb7-3f1cac7c96e6] Running
	I1210 00:05:18.578223   83859 system_pods.go:126] duration metric: took 202.750724ms to wait for k8s-apps to be running ...
	I1210 00:05:18.578233   83859 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:05:18.578277   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:05:18.592635   83859 system_svc.go:56] duration metric: took 14.391582ms WaitForService to wait for kubelet
	I1210 00:05:18.592669   83859 kubeadm.go:582] duration metric: took 7.814589832s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:05:18.592690   83859 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:05:18.776611   83859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:05:18.776632   83859 node_conditions.go:123] node cpu capacity is 2
	I1210 00:05:18.776642   83859 node_conditions.go:105] duration metric: took 183.94737ms to run NodePressure ...
	I1210 00:05:18.776654   83859 start.go:241] waiting for startup goroutines ...
	I1210 00:05:18.776660   83859 start.go:246] waiting for cluster config update ...
	I1210 00:05:18.776672   83859 start.go:255] writing updated cluster config ...
	I1210 00:05:18.776944   83859 ssh_runner.go:195] Run: rm -f paused
	I1210 00:05:18.826550   83859 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:05:18.828405   83859 out.go:177] * Done! kubectl is now configured to use "no-preload-048296" cluster and "default" namespace by default
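At this point the kubeconfig written earlier has been updated for this profile, so the cluster can be inspected from the host; for example:

    # point kubectl at the kubeconfig path shown in the settings lines above
    export KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
    kubectl config use-context no-preload-048296
    kubectl get nodes -o wide
    kubectl -n kube-system get pods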
	I1210 00:05:43.088214   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:05:43.088352   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:05:43.089912   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:05:43.089988   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:05:43.090054   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:05:43.090140   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:05:43.090225   84547 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:05:43.090302   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:05:43.092050   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:05:43.092141   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:05:43.092210   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:05:43.092305   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:05:43.092392   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:05:43.092493   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:05:43.092569   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:05:43.092680   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:05:43.092761   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:05:43.092865   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:05:43.092975   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:05:43.093045   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:05:43.093143   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:05:43.093188   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:05:43.093236   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:05:43.093317   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:05:43.093402   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:05:43.093561   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:05:43.093728   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:05:43.093785   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:05:43.093855   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:05:43.095285   84547 out.go:235]   - Booting up control plane ...
	I1210 00:05:43.095396   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:05:43.095469   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:05:43.095525   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:05:43.095630   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:05:43.095804   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:05:43.095873   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:05:43.095960   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096155   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096237   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096414   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096486   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096679   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096741   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096904   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096969   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.097122   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.097129   84547 kubeadm.go:310] 
	I1210 00:05:43.097167   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:05:43.097202   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:05:43.097211   84547 kubeadm.go:310] 
	I1210 00:05:43.097251   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:05:43.097280   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:05:43.097373   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:05:43.097379   84547 kubeadm.go:310] 
	I1210 00:05:43.097465   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:05:43.097495   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:05:43.097522   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:05:43.097531   84547 kubeadm.go:310] 
	I1210 00:05:43.097619   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:05:43.097688   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1210 00:05:43.097694   84547 kubeadm.go:310] 
	I1210 00:05:43.097794   84547 kubeadm.go:310] 		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:05:43.097875   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:05:43.097941   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:05:43.098038   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:05:43.098124   84547 kubeadm.go:310] 
	W1210 00:05:43.098179   84547 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 00:05:43.098224   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:05:48.532537   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.434287494s)
	I1210 00:05:48.532615   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:05:48.546394   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:05:48.555650   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:05:48.555669   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:05:48.555721   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:05:48.565301   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:05:48.565368   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:05:48.575330   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:05:48.584238   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:05:48.584324   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:05:48.593395   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.602150   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:05:48.602209   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.611177   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:05:48.619837   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:05:48.619907   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
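The four grep/rm pairs above are a stale-kubeconfig check: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is deleted before kubeadm init is retried. The same cleanup as a compact loop (a sketch of what the log shows, not minikube's own implementation):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done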
	I1210 00:05:48.629126   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:05:48.827680   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:07:44.857816   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:07:44.857930   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:07:44.859490   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:07:44.859542   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:07:44.859627   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:07:44.859758   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:07:44.859894   84547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:07:44.859953   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:07:44.861538   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:07:44.861657   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:07:44.861736   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:07:44.861809   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:07:44.861861   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:07:44.861920   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:07:44.861969   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:07:44.862024   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:07:44.862077   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:07:44.862143   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:07:44.862212   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:07:44.862246   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:07:44.862293   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:07:44.862343   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:07:44.862408   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:07:44.862505   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:07:44.862615   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:07:44.862765   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:07:44.862897   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:07:44.862955   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:07:44.863059   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:07:44.865485   84547 out.go:235]   - Booting up control plane ...
	I1210 00:07:44.865595   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:07:44.865720   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:07:44.865815   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:07:44.865929   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:07:44.866136   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:07:44.866209   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:07:44.866332   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866517   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866586   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866752   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866818   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866976   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867042   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867210   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867323   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867512   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867522   84547 kubeadm.go:310] 
	I1210 00:07:44.867611   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:07:44.867671   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:07:44.867691   84547 kubeadm.go:310] 
	I1210 00:07:44.867729   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:07:44.867766   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:07:44.867880   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:07:44.867895   84547 kubeadm.go:310] 
	I1210 00:07:44.868055   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:07:44.868105   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:07:44.868138   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:07:44.868146   84547 kubeadm.go:310] 
	I1210 00:07:44.868250   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:07:44.868354   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:07:44.868362   84547 kubeadm.go:310] 
	I1210 00:07:44.868464   84547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:07:44.868540   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:07:44.868606   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:07:44.868689   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:07:44.868741   84547 kubeadm.go:310] 
	I1210 00:07:44.868762   84547 kubeadm.go:394] duration metric: took 8m1.943355039s to StartCluster
	I1210 00:07:44.868799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:07:44.868852   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:07:44.906641   84547 cri.go:89] found id: ""
	I1210 00:07:44.906667   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.906675   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:07:44.906681   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:07:44.906734   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:07:44.942832   84547 cri.go:89] found id: ""
	I1210 00:07:44.942863   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.942872   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:07:44.942881   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:07:44.942945   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:07:44.978010   84547 cri.go:89] found id: ""
	I1210 00:07:44.978034   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.978042   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:07:44.978047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:07:44.978108   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:07:45.017066   84547 cri.go:89] found id: ""
	I1210 00:07:45.017089   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.017097   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:07:45.017110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:07:45.017172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:07:45.049757   84547 cri.go:89] found id: ""
	I1210 00:07:45.049778   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.049786   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:07:45.049791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:07:45.049842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:07:45.082754   84547 cri.go:89] found id: ""
	I1210 00:07:45.082789   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.082800   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:07:45.082807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:07:45.082933   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:07:45.117188   84547 cri.go:89] found id: ""
	I1210 00:07:45.117219   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.117227   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:07:45.117233   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:07:45.117302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:07:45.153747   84547 cri.go:89] found id: ""
	I1210 00:07:45.153776   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.153785   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:07:45.153795   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:07:45.153810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:07:45.207625   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:07:45.207662   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:07:45.220929   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:07:45.220957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:07:45.291850   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:07:45.291883   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:07:45.291899   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:07:45.397592   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:07:45.397629   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 00:07:45.446755   84547 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 00:07:45.446816   84547 out.go:270] * 
	W1210 00:07:45.446898   84547 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.446921   84547 out.go:270] * 
	W1210 00:07:45.448145   84547 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 00:07:45.452118   84547 out.go:201] 
	W1210 00:07:45.453335   84547 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.453398   84547 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 00:07:45.453438   84547 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 00:07:45.455704   84547 out.go:201] 
	
	
	==> CRI-O <==
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.351074871Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789267351055822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7dd78ee6-dc06-4e0e-8bc2-0e5abe15d2fb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.351561639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcc8c009-97ad-4c44-9517-43394397a268 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.351638232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcc8c009-97ad-4c44-9517-43394397a268 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.351688507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dcc8c009-97ad-4c44-9517-43394397a268 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.386548406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0b91699-21d4-4aba-ba64-dcd60134084d name=/runtime.v1.RuntimeService/Version
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.386653031Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0b91699-21d4-4aba-ba64-dcd60134084d name=/runtime.v1.RuntimeService/Version
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.387903556Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d127bc8-40a9-4303-95ad-27ea17abefd5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.388251856Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789267388231179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d127bc8-40a9-4303-95ad-27ea17abefd5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.388734679Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47c58ea3-4311-4b2c-801c-2f6f53885481 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.388785086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47c58ea3-4311-4b2c-801c-2f6f53885481 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.388816562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=47c58ea3-4311-4b2c-801c-2f6f53885481 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.420887186Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73686fe3-8d28-4911-804e-4554400b73db name=/runtime.v1.RuntimeService/Version
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.420964184Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73686fe3-8d28-4911-804e-4554400b73db name=/runtime.v1.RuntimeService/Version
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.421832145Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=299f4e99-490c-45a2-9ce7-55050c607c39 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.422224011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789267422198931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=299f4e99-490c-45a2-9ce7-55050c607c39 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.422758966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1015b1ee-4d50-4fde-b3a1-67b6a6806d04 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.422809590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1015b1ee-4d50-4fde-b3a1-67b6a6806d04 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.422850256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1015b1ee-4d50-4fde-b3a1-67b6a6806d04 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.456191764Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=061a2669-8cf0-4f21-b60a-a9c3fefd119c name=/runtime.v1.RuntimeService/Version
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.456285869Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=061a2669-8cf0-4f21-b60a-a9c3fefd119c name=/runtime.v1.RuntimeService/Version
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.457148377Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d7e7838-d6e1-45d1-8c63-35f4b5892071 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.457517297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789267457496843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d7e7838-d6e1-45d1-8c63-35f4b5892071 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.458378788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33e548fb-35a2-4073-a4e6-d27334fa8676 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.458430719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33e548fb-35a2-4073-a4e6-d27334fa8676 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:07:47 old-k8s-version-720064 crio[624]: time="2024-12-10 00:07:47.458465128Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=33e548fb-35a2-4073-a4e6-d27334fa8676 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 23:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049452] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044986] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.967295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.057304] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.622707] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.503870] systemd-fstab-generator[550]: Ignoring "noauto" option for root device
	[  +0.058578] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077964] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.212967] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.149461] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.273531] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +6.290551] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.069158] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.965394] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[ +12.784108] kauditd_printk_skb: 46 callbacks suppressed
	[Dec10 00:03] systemd-fstab-generator[5026]: Ignoring "noauto" option for root device
	[Dec10 00:05] systemd-fstab-generator[5316]: Ignoring "noauto" option for root device
	[  +0.059046] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:07:47 up 8 min,  0 users,  load average: 0.01, 0.06, 0.04
	Linux old-k8s-version-720064 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0008422c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000ba5b90, 0x24, 0x60, 0x7f4d2c073610, 0x118, ...)
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]: net/http.(*Transport).dial(0xc00065f7c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000ba5b90, 0x24, 0x0, 0x0, 0x0, ...)
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]: net/http.(*Transport).dialConn(0xc00065f7c0, 0x4f7fe00, 0xc000120018, 0x0, 0xc000c407e0, 0x5, 0xc000ba5b90, 0x24, 0x0, 0xc000c52120, ...)
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]: net/http.(*Transport).dialConnFor(0xc00065f7c0, 0xc000b93340)
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]: created by net/http.(*Transport).queueForDial
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]: goroutine 163 [select]:
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c54420, 0xc000c62080, 0xc000c40a80, 0xc000c40a20)
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]: created by net.(*netFD).connect
	Dec 10 00:07:44 old-k8s-version-720064 kubelet[5498]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Dec 10 00:07:44 old-k8s-version-720064 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 10 00:07:44 old-k8s-version-720064 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 00:07:45 old-k8s-version-720064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Dec 10 00:07:45 old-k8s-version-720064 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 10 00:07:45 old-k8s-version-720064 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 10 00:07:45 old-k8s-version-720064 kubelet[5557]: I1210 00:07:45.488508    5557 server.go:416] Version: v1.20.0
	Dec 10 00:07:45 old-k8s-version-720064 kubelet[5557]: I1210 00:07:45.488874    5557 server.go:837] Client rotation is on, will bootstrap in background
	Dec 10 00:07:45 old-k8s-version-720064 kubelet[5557]: I1210 00:07:45.490875    5557 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 10 00:07:45 old-k8s-version-720064 kubelet[5557]: W1210 00:07:45.491838    5557 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 10 00:07:45 old-k8s-version-720064 kubelet[5557]: I1210 00:07:45.491884    5557 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-720064 -n old-k8s-version-720064
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-720064 -n old-k8s-version-720064: exit status 2 (248.314874ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-720064" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (743.65s)
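
For reference, the failure output above already spells out the checks to run next; the sketch below only strings them together. It is a hedged outline, not part of the test run: the profile name old-k8s-version-720064 and the --extra-config=kubelet.cgroup-driver=systemd retry come from this log, while wrapping the node-side commands in minikube ssh is an assumption (the harness ran them over its own SSH runner).

	#!/usr/bin/env bash
	# Troubleshooting sketch for the K8S_KUBELET_NOT_RUNNING failure above.
	# Node-side commands are taken from the kubeadm error text in this log;
	# the `minikube ssh -p ...` wrapper is an assumption for running them from the host.
	set -euo pipefail
	PROFILE=old-k8s-version-720064   # profile name from this test run

	# 1. Inspect the kubelet, as the kubeadm output suggests.
	minikube ssh -p "$PROFILE" -- 'sudo systemctl status kubelet --no-pager || true'
	minikube ssh -p "$PROFILE" -- 'sudo journalctl -xeu kubelet --no-pager | tail -n 400'

	# 2. List any control-plane containers CRI-O managed to start.
	minikube ssh -p "$PROFILE" -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause || true"

	# 3. Collect the log bundle the report asks for when filing an issue.
	minikube logs -p "$PROFILE" --file=logs.txt

	# 4. Retry with the cgroup-driver override suggested in the failure message.
	minikube start -p "$PROFILE" --extra-config=kubelet.cgroup-driver=systemd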

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1210 00:03:38.330970   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:04:20.544440   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-825613 -n embed-certs-825613
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-10 00:12:36.200968756 +0000 UTC m=+6040.632583856
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-825613 -n embed-certs-825613
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-825613 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-825613 logs -n 25: (1.929680894s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-030585 sudo cat                              | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo find                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo crio                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-030585                                       | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-866797 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | disable-driver-mounts-866797                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:52 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-048296             | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-825613            | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-871210  | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC | 09 Dec 24 23:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC |                     |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-720064        | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-048296                  | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-825613                 | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-871210       | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC | 10 Dec 24 00:04 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-720064             | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
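	
	The wrapped rows of the last table entry above describe a single start invocation; reassembled as a sketch below (the minikube binary name and exact shell quoting are assumptions, the flags are taken verbatim from the table):
	
	  minikube start -p old-k8s-version-720064 --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	    --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0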
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:55:25
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
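	(Worked example of this format, using the first entry below: in "I1209 23:55:25.509412   84547 out.go:345]", I is the severity (info), 1209 the month/day, 23:55:25.509412 the time, 84547 the thread/process id, and out.go:345 the source file and line, followed by the message.)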
	I1209 23:55:25.509412   84547 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:55:25.509516   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509524   84547 out.go:358] Setting ErrFile to fd 2...
	I1209 23:55:25.509541   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509720   84547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:55:25.510225   84547 out.go:352] Setting JSON to false
	I1209 23:55:25.511117   84547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9476,"bootTime":1733779049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:55:25.511206   84547 start.go:139] virtualization: kvm guest
	I1209 23:55:25.513214   84547 out.go:177] * [old-k8s-version-720064] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:55:25.514823   84547 notify.go:220] Checking for updates...
	I1209 23:55:25.514845   84547 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:55:25.516067   84547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:55:25.517350   84547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:55:25.518678   84547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:55:25.520082   84547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:55:25.521432   84547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:55:25.523092   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:55:25.523499   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.523548   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.538209   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I1209 23:55:25.538728   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.539263   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.539282   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.539570   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.539735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.541550   84547 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 23:55:25.542864   84547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:55:25.543159   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.543196   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.558672   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1209 23:55:25.559075   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.559524   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.559547   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.559881   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.560082   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.595916   84547 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 23:55:25.597365   84547 start.go:297] selected driver: kvm2
	I1209 23:55:25.597380   84547 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.597473   84547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:55:25.598182   84547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.598251   84547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:55:25.612993   84547 install.go:137] /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:55:25.613401   84547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:55:25.613430   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:55:25.613473   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:55:25.613509   84547 start.go:340] cluster config:
	{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.613608   84547 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.615371   84547 out.go:177] * Starting "old-k8s-version-720064" primary control-plane node in "old-k8s-version-720064" cluster
	I1209 23:55:25.371778   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:25.616599   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:55:25.616629   84547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 23:55:25.616636   84547 cache.go:56] Caching tarball of preloaded images
	I1209 23:55:25.616716   84547 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:55:25.616727   84547 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1209 23:55:25.616809   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:55:25.616984   84547 start.go:360] acquireMachinesLock for old-k8s-version-720064: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:55:28.439810   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:34.519811   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:37.591794   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:43.671858   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:46.743891   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:52.823826   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:55.895854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:01.975845   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:05.047862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:11.127840   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:14.199834   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:20.279853   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:23.351850   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:29.431829   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:32.503859   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:38.583822   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:41.655831   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:47.735829   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:50.807862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:56.887827   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:59.959820   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:06.039852   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:09.111843   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:15.191823   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:18.263824   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:24.343852   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:27.415807   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:33.495843   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:36.567854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:42.647854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:45.719902   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:51.799842   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:54.871862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:00.951877   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:04.023859   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:10.103869   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:13.175788   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:19.255805   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:22.327784   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:28.407842   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:31.479864   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:34.483630   83900 start.go:364] duration metric: took 4m35.142298634s to acquireMachinesLock for "embed-certs-825613"
	I1209 23:58:34.483690   83900 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:58:34.483698   83900 fix.go:54] fixHost starting: 
	I1209 23:58:34.484038   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:58:34.484080   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:58:34.499267   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I1209 23:58:34.499717   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:58:34.500208   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:58:34.500236   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:58:34.500552   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:58:34.500747   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:34.500886   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:58:34.502318   83900 fix.go:112] recreateIfNeeded on embed-certs-825613: state=Stopped err=<nil>
	I1209 23:58:34.502340   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	W1209 23:58:34.502480   83900 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:58:34.504290   83900 out.go:177] * Restarting existing kvm2 VM for "embed-certs-825613" ...
	I1209 23:58:34.481505   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:58:34.481545   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:58:34.481889   83859 buildroot.go:166] provisioning hostname "no-preload-048296"
	I1209 23:58:34.481917   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:58:34.482120   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:58:34.483492   83859 machine.go:96] duration metric: took 4m37.418278223s to provisionDockerMachine
	I1209 23:58:34.483534   83859 fix.go:56] duration metric: took 4m37.439725581s for fixHost
	I1209 23:58:34.483542   83859 start.go:83] releasing machines lock for "no-preload-048296", held for 4m37.439753726s
	W1209 23:58:34.483579   83859 start.go:714] error starting host: provision: host is not running
	W1209 23:58:34.483682   83859 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1209 23:58:34.483694   83859 start.go:729] Will try again in 5 seconds ...
	I1209 23:58:34.505520   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Start
	I1209 23:58:34.505676   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring networks are active...
	I1209 23:58:34.506381   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring network default is active
	I1209 23:58:34.506676   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring network mk-embed-certs-825613 is active
	I1209 23:58:34.507046   83900 main.go:141] libmachine: (embed-certs-825613) Getting domain xml...
	I1209 23:58:34.507727   83900 main.go:141] libmachine: (embed-certs-825613) Creating domain...
	I1209 23:58:35.719784   83900 main.go:141] libmachine: (embed-certs-825613) Waiting to get IP...
	I1209 23:58:35.720797   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:35.721325   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:35.721377   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:35.721289   85186 retry.go:31] will retry after 251.225165ms: waiting for machine to come up
	I1209 23:58:35.973801   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:35.974247   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:35.974276   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:35.974205   85186 retry.go:31] will retry after 298.029679ms: waiting for machine to come up
	I1209 23:58:36.273658   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:36.274090   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:36.274119   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:36.274049   85186 retry.go:31] will retry after 467.217404ms: waiting for machine to come up
	I1209 23:58:36.742591   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:36.743027   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:36.743052   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:36.742982   85186 retry.go:31] will retry after 409.741731ms: waiting for machine to come up
	I1209 23:58:37.154543   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:37.154958   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:37.154982   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:37.154916   85186 retry.go:31] will retry after 723.02759ms: waiting for machine to come up
	I1209 23:58:37.878986   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:37.879474   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:37.879507   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:37.879423   85186 retry.go:31] will retry after 767.399861ms: waiting for machine to come up
	I1209 23:58:38.648416   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:38.648903   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:38.648927   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:38.648853   85186 retry.go:31] will retry after 1.06423499s: waiting for machine to come up
	I1209 23:58:39.485363   83859 start.go:360] acquireMachinesLock for no-preload-048296: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:58:39.714711   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:39.715114   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:39.715147   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:39.715058   85186 retry.go:31] will retry after 1.402884688s: waiting for machine to come up
	I1209 23:58:41.119828   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:41.120315   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:41.120359   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:41.120258   85186 retry.go:31] will retry after 1.339874475s: waiting for machine to come up
	I1209 23:58:42.461858   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:42.462314   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:42.462343   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:42.462261   85186 retry.go:31] will retry after 1.455809273s: waiting for machine to come up
	I1209 23:58:43.920097   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:43.920418   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:43.920448   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:43.920371   85186 retry.go:31] will retry after 2.872969061s: waiting for machine to come up
	I1209 23:58:46.796163   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:46.796477   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:46.796499   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:46.796439   85186 retry.go:31] will retry after 2.530677373s: waiting for machine to come up
	I1209 23:58:49.330146   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:49.330601   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:49.330631   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:49.330561   85186 retry.go:31] will retry after 4.492372507s: waiting for machine to come up
	I1209 23:58:53.827982   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.828461   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has current primary IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.828480   83900 main.go:141] libmachine: (embed-certs-825613) Found IP for machine: 192.168.50.19
	I1209 23:58:53.828492   83900 main.go:141] libmachine: (embed-certs-825613) Reserving static IP address...
	I1209 23:58:53.829001   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "embed-certs-825613", mac: "52:54:00:0f:2e:da", ip: "192.168.50.19"} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.829028   83900 main.go:141] libmachine: (embed-certs-825613) Reserved static IP address: 192.168.50.19
	I1209 23:58:53.829051   83900 main.go:141] libmachine: (embed-certs-825613) DBG | skip adding static IP to network mk-embed-certs-825613 - found existing host DHCP lease matching {name: "embed-certs-825613", mac: "52:54:00:0f:2e:da", ip: "192.168.50.19"}
	I1209 23:58:53.829067   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Getting to WaitForSSH function...
	I1209 23:58:53.829080   83900 main.go:141] libmachine: (embed-certs-825613) Waiting for SSH to be available...
	I1209 23:58:53.831079   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.831430   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.831462   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.831630   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Using SSH client type: external
	I1209 23:58:53.831682   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa (-rw-------)
	I1209 23:58:53.831723   83900 main.go:141] libmachine: (embed-certs-825613) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:58:53.831752   83900 main.go:141] libmachine: (embed-certs-825613) DBG | About to run SSH command:
	I1209 23:58:53.831765   83900 main.go:141] libmachine: (embed-certs-825613) DBG | exit 0
	I1209 23:58:53.959446   83900 main.go:141] libmachine: (embed-certs-825613) DBG | SSH cmd err, output: <nil>: 
	I1209 23:58:53.959864   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetConfigRaw
	I1209 23:58:53.960515   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:53.963227   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.963614   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.963644   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.963889   83900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/config.json ...
	I1209 23:58:53.964086   83900 machine.go:93] provisionDockerMachine start ...
	I1209 23:58:53.964103   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:53.964299   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:53.966516   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.966803   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.966834   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.966959   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:53.967149   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:53.967292   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:53.967428   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:53.967599   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:53.967824   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:53.967839   83900 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:58:54.079701   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:58:54.079732   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.080045   83900 buildroot.go:166] provisioning hostname "embed-certs-825613"
	I1209 23:58:54.080079   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.080333   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.082930   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.083283   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.083321   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.083420   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.083657   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.083834   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.083974   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.084095   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.084269   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.084282   83900 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-825613 && echo "embed-certs-825613" | sudo tee /etc/hostname
	I1209 23:58:54.208724   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825613
	
	I1209 23:58:54.208754   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.211739   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.212095   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.212122   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.212297   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.212513   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.212763   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.212936   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.213102   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.213285   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.213311   83900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-825613' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-825613/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-825613' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:58:55.043989   84259 start.go:364] duration metric: took 4m2.194304919s to acquireMachinesLock for "default-k8s-diff-port-871210"
	I1209 23:58:55.044059   84259 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:58:55.044067   84259 fix.go:54] fixHost starting: 
	I1209 23:58:55.044480   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:58:55.044546   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:58:55.060908   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I1209 23:58:55.061391   84259 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:58:55.061954   84259 main.go:141] libmachine: Using API Version  1
	I1209 23:58:55.061982   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:58:55.062346   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:58:55.062508   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:58:55.062674   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1209 23:58:55.064365   84259 fix.go:112] recreateIfNeeded on default-k8s-diff-port-871210: state=Stopped err=<nil>
	I1209 23:58:55.064386   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	W1209 23:58:55.064564   84259 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:58:55.066707   84259 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-871210" ...
	I1209 23:58:54.331911   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:58:54.331941   83900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:58:54.331966   83900 buildroot.go:174] setting up certificates
	I1209 23:58:54.331978   83900 provision.go:84] configureAuth start
	I1209 23:58:54.331991   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.332275   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:54.334597   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.334926   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.334957   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.335117   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.337393   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.337731   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.337763   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.337876   83900 provision.go:143] copyHostCerts
	I1209 23:58:54.337923   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:58:54.337936   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:58:54.338006   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:58:54.338093   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:58:54.338101   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:58:54.338127   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:58:54.338204   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:58:54.338212   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:58:54.338233   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:58:54.338286   83900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.embed-certs-825613 san=[127.0.0.1 192.168.50.19 embed-certs-825613 localhost minikube]
	I1209 23:58:54.423610   83900 provision.go:177] copyRemoteCerts
	I1209 23:58:54.423679   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:58:54.423706   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.426695   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.427009   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.427041   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.427144   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.427332   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.427516   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.427706   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:54.513991   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 23:58:54.536700   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:58:54.558541   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:58:54.580319   83900 provision.go:87] duration metric: took 248.326924ms to configureAuth
	I1209 23:58:54.580359   83900 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:58:54.580568   83900 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:58:54.580652   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.583347   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.583720   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.583748   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.583963   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.584171   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.584327   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.584481   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.584621   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.584775   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.584790   83900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:58:54.804814   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:58:54.804860   83900 machine.go:96] duration metric: took 840.759848ms to provisionDockerMachine
	I1209 23:58:54.804876   83900 start.go:293] postStartSetup for "embed-certs-825613" (driver="kvm2")
	I1209 23:58:54.804891   83900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:58:54.804916   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:54.805269   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:58:54.805297   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.807977   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.808340   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.808370   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.808505   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.808765   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.808945   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.809100   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:54.893741   83900 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:58:54.897936   83900 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:58:54.897961   83900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:58:54.898065   83900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:58:54.898145   83900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:58:54.898235   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:58:54.907056   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:58:54.929618   83900 start.go:296] duration metric: took 124.726532ms for postStartSetup
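
	The filesync lines above scan the local .minikube/addons and .minikube/files trees and mirror anything found onto the guest, with the path under files/ becoming the absolute destination on the node (here files/etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem). A minimal Go sketch of that mapping, not minikube's actual filesync.go, assuming only that the relative path is reused verbatim:

	// filesync_sketch.go: sketch of how assets under .minikube/files map onto the guest.
	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	func main() {
		root := "/home/jenkins/minikube-integration/19888-18950/.minikube/files" // path from the log above
		_ = filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return nil
			}
			rel, _ := filepath.Rel(root, p)
			dest := "/" + rel // e.g. etc/ssl/certs/262532.pem -> /etc/ssl/certs/262532.pem
			fmt.Printf("local asset: %s -> %s\n", p, dest)
			return nil
		})
	}
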
	I1209 23:58:54.929664   83900 fix.go:56] duration metric: took 20.445966428s for fixHost
	I1209 23:58:54.929711   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.932476   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.932866   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.932893   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.933108   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.933334   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.933523   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.933638   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.933747   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.933956   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.933967   83900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:58:55.043841   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788735.012193373
	
	I1209 23:58:55.043863   83900 fix.go:216] guest clock: 1733788735.012193373
	I1209 23:58:55.043870   83900 fix.go:229] Guest: 2024-12-09 23:58:55.012193373 +0000 UTC Remote: 2024-12-09 23:58:54.929689658 +0000 UTC m=+295.728685701 (delta=82.503715ms)
	I1209 23:58:55.043888   83900 fix.go:200] guest clock delta is within tolerance: 82.503715ms
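
	The fix.go lines above read the guest clock over SSH with `date +%s.%N` and accept the host/guest offset because it is inside a tolerance window. A minimal Go sketch of that check; the 3s tolerance below is an assumption for illustration only (the log only shows that an 82.5ms delta passes):

	// clockdelta_sketch.go: sketch of the guest-clock tolerance check, not minikube's fix.go.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func main() {
		guestOut := "1733788735.012193373" // SSH output of `date +%s.%N` from the log above
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		tolerance := 3 * time.Second // assumed threshold, for illustration only
		if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}
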
	I1209 23:58:55.043892   83900 start.go:83] releasing machines lock for "embed-certs-825613", held for 20.560223511s
	I1209 23:58:55.043915   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.044198   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:55.046852   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.047246   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.047283   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.047446   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.047910   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.048101   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.048198   83900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:58:55.048235   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:55.048424   83900 ssh_runner.go:195] Run: cat /version.json
	I1209 23:58:55.048453   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:55.050905   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051274   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.051317   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051339   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051502   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:55.051721   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:55.051772   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.051794   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051925   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:55.051980   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:55.052091   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:55.052133   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:55.052263   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:55.052407   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:55.136384   83900 ssh_runner.go:195] Run: systemctl --version
	I1209 23:58:55.154927   83900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:58:55.297639   83900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:58:55.303733   83900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:58:55.303791   83900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:58:55.323858   83900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:58:55.323884   83900 start.go:495] detecting cgroup driver to use...
	I1209 23:58:55.323955   83900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:58:55.342390   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:58:55.360400   83900 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:58:55.360463   83900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:58:55.374005   83900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:58:55.387515   83900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:58:55.507866   83900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:58:55.676259   83900 docker.go:233] disabling docker service ...
	I1209 23:58:55.676338   83900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:58:55.695273   83900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:58:55.707909   83900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:58:55.824683   83900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:58:55.934080   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:58:55.950700   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:58:55.967756   83900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:58:55.967813   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.978028   83900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:58:55.978102   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.988960   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.999661   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.010253   83900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:58:56.020928   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.030944   83900 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.050251   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.062723   83900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:58:56.072435   83900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:58:56.072501   83900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:58:56.085332   83900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
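
	When the net.bridge.bridge-nf-call-iptables sysctl cannot be read (the status-255 error above), the fallback is to load br_netfilter and then force IPv4 forwarding on. A minimal Go sketch of that sequence, assuming root on a Linux guest; it is not the ssh_runner-based code minikube itself uses:

	// netfilter_sketch.go: sketch of the "probe sysctl, else modprobe" fallback shown above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			fmt.Println("sysctl probe failed, loading br_netfilter:", err)
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				fmt.Println("modprobe br_netfilter failed:", err)
			}
		}
		// equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			fmt.Println("enabling ip_forward failed:", err)
		}
	}
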
	I1209 23:58:56.095538   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:58:56.214133   83900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:58:56.310023   83900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:58:56.310107   83900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:58:56.314988   83900 start.go:563] Will wait 60s for crictl version
	I1209 23:58:56.315057   83900 ssh_runner.go:195] Run: which crictl
	I1209 23:58:56.318865   83900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:58:56.356889   83900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:58:56.356996   83900 ssh_runner.go:195] Run: crio --version
	I1209 23:58:56.387128   83900 ssh_runner.go:195] Run: crio --version
	I1209 23:58:56.417781   83900 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:58:55.068139   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Start
	I1209 23:58:55.068329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring networks are active...
	I1209 23:58:55.069026   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring network default is active
	I1209 23:58:55.069526   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring network mk-default-k8s-diff-port-871210 is active
	I1209 23:58:55.069970   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Getting domain xml...
	I1209 23:58:55.070725   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Creating domain...
	I1209 23:58:56.366161   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting to get IP...
	I1209 23:58:56.367215   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.367659   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.367720   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.367639   85338 retry.go:31] will retry after 212.733452ms: waiting for machine to come up
	I1209 23:58:56.582400   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.582945   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.582973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.582895   85338 retry.go:31] will retry after 380.03081ms: waiting for machine to come up
	I1209 23:58:56.964721   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.965265   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.965296   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.965208   85338 retry.go:31] will retry after 429.612511ms: waiting for machine to come up
	I1209 23:58:57.396713   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.397267   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.397298   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:57.397236   85338 retry.go:31] will retry after 595.581233ms: waiting for machine to come up
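
	The retry.go lines above poll libvirt for the new domain's IP and back off a little longer after each miss. A minimal Go sketch of that pattern; lookupIP is a hypothetical stand-in, and the exact backoff schedule is not taken from minikube, only the growing-delay shape seen in the log:

	// retry_sketch.go: sketch of the "will retry after ...: waiting for machine to come up" loop.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying the DHCP lease of the domain.
	func lookupIP() (string, error) { return "", errors.New("unable to find current IP address") }

	func main() {
		wait := 200 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			ip, err := lookupIP()
			if err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, wait)
			time.Sleep(wait)
			wait = wait * 3 / 2 // grow the delay, roughly like the increasing waits in the log
		}
		fmt.Println("gave up waiting for machine to come up")
	}
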
	I1209 23:58:56.418967   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:56.422317   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:56.422632   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:56.422656   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:56.422925   83900 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 23:58:56.426992   83900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:58:56.440436   83900 kubeadm.go:883] updating cluster {Name:embed-certs-825613 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:58:56.440548   83900 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:58:56.440592   83900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:58:56.475823   83900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:58:56.475907   83900 ssh_runner.go:195] Run: which lz4
	I1209 23:58:56.479747   83900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:58:56.483850   83900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:58:56.483900   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:58:57.820979   83900 crio.go:462] duration metric: took 1.341265764s to copy over tarball
	I1209 23:58:57.821091   83900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:58:57.994077   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.994566   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.994590   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:57.994526   85338 retry.go:31] will retry after 728.981009ms: waiting for machine to come up
	I1209 23:58:58.725707   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:58.726209   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:58.726252   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:58.726125   85338 retry.go:31] will retry after 701.836089ms: waiting for machine to come up
	I1209 23:58:59.429804   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:59.430322   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:59.430350   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:59.430266   85338 retry.go:31] will retry after 735.800538ms: waiting for machine to come up
	I1209 23:59:00.167774   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:00.168363   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:00.168397   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:00.168319   85338 retry.go:31] will retry after 1.052511845s: waiting for machine to come up
	I1209 23:59:01.222463   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:01.222960   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:01.222991   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:01.222919   85338 retry.go:31] will retry after 1.655880765s: waiting for machine to come up
	I1209 23:58:59.925304   83900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104178296s)
	I1209 23:58:59.925338   83900 crio.go:469] duration metric: took 2.104325305s to extract the tarball
	I1209 23:58:59.925348   83900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:58:59.962716   83900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:00.008762   83900 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:59:00.008793   83900 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:59:00.008803   83900 kubeadm.go:934] updating node { 192.168.50.19 8443 v1.31.2 crio true true} ...
	I1209 23:59:00.008929   83900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-825613 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:00.009008   83900 ssh_runner.go:195] Run: crio config
	I1209 23:59:00.050777   83900 cni.go:84] Creating CNI manager for ""
	I1209 23:59:00.050801   83900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:00.050813   83900 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:00.050842   83900 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.19 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-825613 NodeName:embed-certs-825613 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:59:00.051001   83900 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-825613"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.19"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.19"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:00.051083   83900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:59:00.062278   83900 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:00.062354   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:00.073157   83900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1209 23:59:00.088933   83900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:00.104358   83900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1209 23:59:00.122425   83900 ssh_runner.go:195] Run: grep 192.168.50.19	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:00.126224   83900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:00.138974   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:00.252489   83900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:00.269127   83900 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613 for IP: 192.168.50.19
	I1209 23:59:00.269155   83900 certs.go:194] generating shared ca certs ...
	I1209 23:59:00.269177   83900 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:00.269359   83900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:00.269406   83900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:00.269421   83900 certs.go:256] generating profile certs ...
	I1209 23:59:00.269551   83900 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/client.key
	I1209 23:59:00.269651   83900 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.key.12f830e7
	I1209 23:59:00.269724   83900 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.key
	I1209 23:59:00.269901   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:00.269954   83900 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:00.269968   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:00.270007   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:00.270056   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:00.270096   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:00.270161   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:00.271078   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:00.322392   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:00.351665   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:00.378615   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:00.401622   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 23:59:00.430280   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 23:59:00.453489   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:00.478368   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:00.501404   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:00.524273   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:00.546132   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:00.568553   83900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:00.584196   83900 ssh_runner.go:195] Run: openssl version
	I1209 23:59:00.589905   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:00.600107   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.604436   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.604504   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.609960   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:00.620129   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:00.629895   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.633993   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.634053   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.639538   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:00.649781   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:00.659593   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.663789   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.663832   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.669033   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:00.678881   83900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:00.683006   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:00.688631   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:00.694212   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:00.699930   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:00.705400   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:00.710668   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
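
	Each `openssl x509 ... -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours. A minimal pure-Go sketch of the same check using crypto/x509; the path is one of the files named in the log, and reading it requires root on the node:

	// certcheck_sketch.go: sketch of what `openssl x509 -checkend 86400` verifies.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 86400s; would regenerate")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}
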
	I1209 23:59:00.715982   83900 kubeadm.go:392] StartCluster: {Name:embed-certs-825613 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:00.716059   83900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:00.716094   83900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:00.758093   83900 cri.go:89] found id: ""
	I1209 23:59:00.758164   83900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:00.767877   83900 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:00.767895   83900 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:00.767932   83900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:00.777172   83900 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:00.778578   83900 kubeconfig.go:125] found "embed-certs-825613" server: "https://192.168.50.19:8443"
	I1209 23:59:00.780691   83900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:00.790459   83900 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.19
	I1209 23:59:00.790492   83900 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:00.790507   83900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:00.790563   83900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:00.831048   83900 cri.go:89] found id: ""
	I1209 23:59:00.831129   83900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:00.847797   83900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:00.857819   83900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:00.857841   83900 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:00.857877   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:59:00.867108   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:00.867159   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:00.876742   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:59:00.885861   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:00.885942   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:00.895651   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:59:00.904027   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:00.904092   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:00.912956   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:59:00.921647   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:00.921704   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
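
	The block above greps each kubeconfig under /etc/kubernetes for the expected https://control-plane.minikube.internal:8443 endpoint and deletes any file that is missing or does not reference it, so kubeadm can regenerate them. A minimal Go sketch of that cleanup loop (a sketch of the behaviour shown, not minikube's kubeadm.go):

	// staleconf_sketch.go: sketch of the stale kubeconfig cleanup shown above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				fmt.Printf("%s may not reference %s - removing\n", f, endpoint)
				_ = os.Remove(f)
			}
		}
	}
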
	I1209 23:59:00.930445   83900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:00.939496   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:01.049561   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.012953   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.234060   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.302650   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
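
	Instead of a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A minimal Go sketch of that sequence using os/exec; the paths and version-pinned binary mirror the log, and error handling is deliberately thin:

	// phases_sketch.go: sketch of replaying kubeadm init phases during a cluster restart.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.31.2/kubeadm"
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{"init", "phase"}, p...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			fmt.Println("running:", kubeadm, args)
			if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
				fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
				return
			}
		}
	}
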
	I1209 23:59:02.378572   83900 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:02.378677   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:02.878755   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:03.379050   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:03.879215   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:02.880033   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:02.880532   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:02.880560   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:02.880488   85338 retry.go:31] will retry after 2.112793708s: waiting for machine to come up
	I1209 23:59:04.994713   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:04.995223   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:04.995254   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:04.995183   85338 retry.go:31] will retry after 2.420394988s: waiting for machine to come up
	I1209 23:59:07.416846   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:07.417309   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:07.417338   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:07.417281   85338 retry.go:31] will retry after 2.479420656s: waiting for machine to come up
	I1209 23:59:04.379735   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:04.406446   83900 api_server.go:72] duration metric: took 2.027872494s to wait for apiserver process to appear ...
	I1209 23:59:04.406480   83900 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:59:04.406506   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:04.407056   83900 api_server.go:269] stopped: https://192.168.50.19:8443/healthz: Get "https://192.168.50.19:8443/healthz": dial tcp 192.168.50.19:8443: connect: connection refused
	I1209 23:59:04.907381   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:06.967755   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:06.967791   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:06.967808   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.003261   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:07.003295   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:07.406692   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.411139   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:07.411168   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:07.906689   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.911136   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:07.911176   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:08.406697   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:08.411051   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 200:
	ok
	I1209 23:59:08.422472   83900 api_server.go:141] control plane version: v1.31.2
	I1209 23:59:08.422506   83900 api_server.go:131] duration metric: took 4.01601823s to wait for apiserver health ...
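The 500 responses above persist only while the apiserver's poststart hooks (here rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are still pending; once they complete, /healthz flips to 200 and the wait finishes after roughly 4 seconds. A minimal sketch of this kind of readiness poll follows, assuming a self-signed apiserver certificate and a fixed ~500ms retry cadence; the function name, timeouts, and TLS handling are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the deadline expires. Illustrative only; the real check in the log lives in
// minikube's api_server.go and also inspects the response body for hook status.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption: the apiserver presents a self-signed certificate for the
		// VM IP, so this illustrative probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // 200: every poststart hook reported ok
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.19:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the log itself, each failing attempt prints the same hook list twice, once at Info level from api_server.go:279 and once at Warning level from api_server.go:103, which is why the output repeats.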
	I1209 23:59:08.422532   83900 cni.go:84] Creating CNI manager for ""
	I1209 23:59:08.422541   83900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:08.424238   83900 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:59:08.425539   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:59:08.435424   83900 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 23:59:08.451320   83900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:59:08.461244   83900 system_pods.go:59] 8 kube-system pods found
	I1209 23:59:08.461285   83900 system_pods.go:61] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:59:08.461296   83900 system_pods.go:61] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:59:08.461305   83900 system_pods.go:61] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:59:08.461313   83900 system_pods.go:61] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:59:08.461322   83900 system_pods.go:61] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 23:59:08.461329   83900 system_pods.go:61] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:59:08.461346   83900 system_pods.go:61] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:59:08.461358   83900 system_pods.go:61] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 23:59:08.461368   83900 system_pods.go:74] duration metric: took 10.019045ms to wait for pod list to return data ...
	I1209 23:59:08.461381   83900 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:59:08.464945   83900 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:59:08.464973   83900 node_conditions.go:123] node cpu capacity is 2
	I1209 23:59:08.464986   83900 node_conditions.go:105] duration metric: took 3.600013ms to run NodePressure ...
	I1209 23:59:08.465011   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:08.762283   83900 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 23:59:08.766618   83900 kubeadm.go:739] kubelet initialised
	I1209 23:59:08.766640   83900 kubeadm.go:740] duration metric: took 4.332483ms waiting for restarted kubelet to initialise ...
	I1209 23:59:08.766648   83900 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:08.771039   83900 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.775795   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.775820   83900 pod_ready.go:82] duration metric: took 4.756744ms for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.775829   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.775836   83900 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.780473   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "etcd-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.780498   83900 pod_ready.go:82] duration metric: took 4.651756ms for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.780507   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "etcd-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.780517   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.785226   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.785252   83900 pod_ready.go:82] duration metric: took 4.725948ms for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.785261   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.785268   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.855086   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.855117   83900 pod_ready.go:82] duration metric: took 69.839948ms for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.855129   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.855141   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:09.255415   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-proxy-rn6fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.255451   83900 pod_ready.go:82] duration metric: took 400.29383ms for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:09.255461   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-proxy-rn6fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.255467   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:09.654750   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.654775   83900 pod_ready.go:82] duration metric: took 399.301549ms for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:09.654785   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.654792   83900 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:10.054952   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:10.054979   83900 pod_ready.go:82] duration metric: took 400.177675ms for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:10.054995   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:10.055002   83900 pod_ready.go:39] duration metric: took 1.288346997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:10.055019   83900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:59:10.066533   83900 ops.go:34] apiserver oom_adj: -16
	I1209 23:59:10.066559   83900 kubeadm.go:597] duration metric: took 9.298658158s to restartPrimaryControlPlane
	I1209 23:59:10.066570   83900 kubeadm.go:394] duration metric: took 9.350595042s to StartCluster
	I1209 23:59:10.066590   83900 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:10.066674   83900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:59:10.068469   83900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:10.068732   83900 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:59:10.068795   83900 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 23:59:10.068901   83900 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-825613"
	I1209 23:59:10.068925   83900 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-825613"
	W1209 23:59:10.068932   83900 addons.go:243] addon storage-provisioner should already be in state true
	I1209 23:59:10.068966   83900 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:10.068960   83900 addons.go:69] Setting default-storageclass=true in profile "embed-certs-825613"
	I1209 23:59:10.068969   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.069011   83900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-825613"
	I1209 23:59:10.068972   83900 addons.go:69] Setting metrics-server=true in profile "embed-certs-825613"
	I1209 23:59:10.069584   83900 addons.go:234] Setting addon metrics-server=true in "embed-certs-825613"
	W1209 23:59:10.069616   83900 addons.go:243] addon metrics-server should already be in state true
	I1209 23:59:10.069644   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.070176   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.070220   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.070219   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.070255   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.070947   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.071024   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.071039   83900 out.go:177] * Verifying Kubernetes components...
	I1209 23:59:10.073122   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:10.085793   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I1209 23:59:10.086012   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I1209 23:59:10.086282   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.086412   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.086794   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.086823   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.086907   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.086930   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.087156   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38521
	I1209 23:59:10.087160   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.087251   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.087469   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.087617   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.087792   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.087828   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.088155   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.088177   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.088692   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.089323   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.089358   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.090339   83900 addons.go:234] Setting addon default-storageclass=true in "embed-certs-825613"
	W1209 23:59:10.090357   83900 addons.go:243] addon default-storageclass should already be in state true
	I1209 23:59:10.090379   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.090609   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.090639   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.103251   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I1209 23:59:10.103791   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104305   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.104325   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.104376   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I1209 23:59:10.104560   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I1209 23:59:10.104713   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.104736   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104848   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104854   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.105269   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.105285   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.105393   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.105414   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.105595   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.105751   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.105784   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.106203   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.106235   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.107268   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.107710   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.109781   83900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:10.109863   83900 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 23:59:10.111198   83900 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:59:10.111210   83900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:59:10.111224   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.111294   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:59:10.111300   83900 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:59:10.111309   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.114610   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.114929   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.114962   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.114980   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.115159   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.115318   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.115438   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.115474   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.115516   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.115599   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.115704   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.115844   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.115944   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.116149   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.123450   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1209 23:59:10.123926   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.124406   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.124445   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.124744   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.124936   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.126437   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.126619   83900 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:59:10.126637   83900 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:59:10.126656   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.129218   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.129663   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.129695   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.129874   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.130055   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.130164   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.130296   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.272711   83900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:10.290121   83900 node_ready.go:35] waiting up to 6m0s for node "embed-certs-825613" to be "Ready" ...
	I1209 23:59:10.379383   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:59:10.397672   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:59:10.403543   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:59:10.403591   83900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 23:59:10.439890   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:59:10.439915   83900 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:59:10.482300   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:59:10.482331   83900 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:59:10.550923   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:59:11.613572   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.234149831s)
	I1209 23:59:11.613622   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.613634   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.613655   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.21594498s)
	I1209 23:59:11.613701   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.613713   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.613929   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.613976   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.613992   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.613979   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614006   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.614018   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614032   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.614052   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.614000   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.614085   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.614332   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614343   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.614343   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614361   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614362   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614372   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.621731   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.621749   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.622001   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.622017   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664217   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113253318s)
	I1209 23:59:11.664273   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.664288   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.664633   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.664637   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.664649   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664658   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.664676   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.664875   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.664888   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664903   83900 addons.go:475] Verifying addon metrics-server=true in "embed-certs-825613"
	I1209 23:59:11.666814   83900 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1209 23:59:09.899886   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:09.900284   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:09.900314   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:09.900262   85338 retry.go:31] will retry after 3.641244983s: waiting for machine to come up
	I1209 23:59:11.668002   83900 addons.go:510] duration metric: took 1.599215886s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1209 23:59:12.293475   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:14.960350   84547 start.go:364] duration metric: took 3m49.343321897s to acquireMachinesLock for "old-k8s-version-720064"
	I1209 23:59:14.960428   84547 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:59:14.960440   84547 fix.go:54] fixHost starting: 
	I1209 23:59:14.960886   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:14.960950   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:14.981976   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I1209 23:59:14.982425   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:14.982933   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:59:14.982966   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:14.983341   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:14.983587   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:14.983772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetState
	I1209 23:59:14.985748   84547 fix.go:112] recreateIfNeeded on old-k8s-version-720064: state=Stopped err=<nil>
	I1209 23:59:14.985774   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	W1209 23:59:14.985968   84547 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:59:14.987652   84547 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-720064" ...
	I1209 23:59:14.988869   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .Start
	I1209 23:59:14.989086   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring networks are active...
	I1209 23:59:14.989817   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network default is active
	I1209 23:59:14.990188   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network mk-old-k8s-version-720064 is active
	I1209 23:59:14.990640   84547 main.go:141] libmachine: (old-k8s-version-720064) Getting domain xml...
	I1209 23:59:14.991392   84547 main.go:141] libmachine: (old-k8s-version-720064) Creating domain...
	I1209 23:59:13.544971   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.545381   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Found IP for machine: 192.168.72.54
	I1209 23:59:13.545408   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has current primary IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.545418   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Reserving static IP address...
	I1209 23:59:13.545913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-871210", mac: "52:54:00:5e:5b:a3", ip: "192.168.72.54"} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.545942   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | skip adding static IP to network mk-default-k8s-diff-port-871210 - found existing host DHCP lease matching {name: "default-k8s-diff-port-871210", mac: "52:54:00:5e:5b:a3", ip: "192.168.72.54"}
	I1209 23:59:13.545957   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Reserved static IP address: 192.168.72.54
	I1209 23:59:13.545973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for SSH to be available...
	I1209 23:59:13.545988   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Getting to WaitForSSH function...
	I1209 23:59:13.548279   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.548641   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.548700   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.548788   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Using SSH client type: external
	I1209 23:59:13.548812   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa (-rw-------)
	I1209 23:59:13.548882   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:13.548908   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | About to run SSH command:
	I1209 23:59:13.548923   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | exit 0
	I1209 23:59:13.687704   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | SSH cmd err, output: <nil>: 
	I1209 23:59:13.688021   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetConfigRaw
	I1209 23:59:13.688673   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:13.690853   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.691187   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.691224   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.691421   84259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/config.json ...
	I1209 23:59:13.691646   84259 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:13.691665   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:13.691901   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.693958   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.694230   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.694257   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.694392   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.694602   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.694771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.694913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.695093   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.695278   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.695288   84259 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:13.803602   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:13.803629   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:13.803904   84259 buildroot.go:166] provisioning hostname "default-k8s-diff-port-871210"
	I1209 23:59:13.803928   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:13.804106   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.806369   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.806769   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.806800   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.806906   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.807053   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.807222   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.807329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.807551   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.807751   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.807765   84259 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-871210 && echo "default-k8s-diff-port-871210" | sudo tee /etc/hostname
	I1209 23:59:13.932849   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-871210
	
	I1209 23:59:13.932874   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.935573   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.935943   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.935972   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.936143   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.936388   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.936546   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.936682   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.936840   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.937016   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.937046   84259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-871210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-871210/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-871210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:14.057493   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:59:14.057526   84259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:14.057565   84259 buildroot.go:174] setting up certificates
	I1209 23:59:14.057575   84259 provision.go:84] configureAuth start
	I1209 23:59:14.057586   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:14.057860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:14.060619   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.060947   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.060973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.061170   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.063591   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.063919   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.063946   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.064133   84259 provision.go:143] copyHostCerts
	I1209 23:59:14.064180   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:14.064189   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:14.064242   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:14.064333   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:14.064341   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:14.064364   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:14.064416   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:14.064424   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:14.064442   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:14.064488   84259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-871210 san=[127.0.0.1 192.168.72.54 default-k8s-diff-port-871210 localhost minikube]
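The provisioning step logged above issues a server certificate for the VM, signed by the minikube CA, with SANs covering 127.0.0.1, the VM IP, and the machine hostnames. A compressed sketch of that kind of issuance with crypto/x509 follows; unlike minikube, which reuses the existing ca.pem/ca-key.pem, this example generates a throwaway CA and omits error handling for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the example (assumption; the real flow loads ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate whose SANs match the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-871210"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.54")},
		DNSNames:     []string{"default-k8s-diff-port-871210", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}

The copyRemoteCerts step that follows then pushes ca.pem, server.pem, and server-key.pem into /etc/docker on the VM, which matches the scp lines logged below.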
	I1209 23:59:14.332090   84259 provision.go:177] copyRemoteCerts
	I1209 23:59:14.332148   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:14.332174   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.334856   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.335275   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.335309   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.335428   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.335647   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.335860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.336002   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.422641   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:14.445943   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1209 23:59:14.468188   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:59:14.490678   84259 provision.go:87] duration metric: took 433.06125ms to configureAuth
	I1209 23:59:14.490710   84259 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:14.490883   84259 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:14.490953   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.493529   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.493858   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.493879   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.494073   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.494276   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.494435   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.494555   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.494716   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:14.494880   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:14.494899   84259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:14.720424   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:14.720452   84259 machine.go:96] duration metric: took 1.028790944s to provisionDockerMachine
	I1209 23:59:14.720468   84259 start.go:293] postStartSetup for "default-k8s-diff-port-871210" (driver="kvm2")
	I1209 23:59:14.720480   84259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:14.720496   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.720814   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:14.720844   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.723417   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.723817   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.723851   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.723992   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.724171   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.724328   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.724453   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.810295   84259 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:14.814400   84259 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:14.814424   84259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:14.814501   84259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:14.814595   84259 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:14.814706   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:14.823765   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:14.847185   84259 start.go:296] duration metric: took 126.701807ms for postStartSetup
	I1209 23:59:14.847226   84259 fix.go:56] duration metric: took 19.803160459s for fixHost
	I1209 23:59:14.847245   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.850067   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.850488   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.850516   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.850697   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.850915   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.851082   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.851218   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.851369   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:14.851558   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:14.851588   84259 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:14.960167   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788754.933814431
	
	I1209 23:59:14.960190   84259 fix.go:216] guest clock: 1733788754.933814431
	I1209 23:59:14.960198   84259 fix.go:229] Guest: 2024-12-09 23:59:14.933814431 +0000 UTC Remote: 2024-12-09 23:59:14.847230309 +0000 UTC m=+262.142902203 (delta=86.584122ms)
	I1209 23:59:14.960217   84259 fix.go:200] guest clock delta is within tolerance: 86.584122ms
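
The fix.go lines above run `date +%s.%N` on the guest over SSH, compare the result against the host-side timestamp, and accept the 86.584122ms drift as being within tolerance. A minimal sketch of that kind of check, using the values from the log; the parsing helper and the 2-second tolerance are assumptions for illustration, not minikube's actual code:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (seconds.nanoseconds, where %N
// always prints nine digits) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	sec, nsec := strings.TrimSpace(out), "0"
	if i := strings.IndexByte(sec, '.'); i >= 0 {
		sec, nsec = sec[:i], sec[i+1:]
	}
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	n, err := strconv.ParseInt(nsec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(s, n), nil
}

func main() {
	// Guest and remote timestamps taken from the log lines above.
	guest, err := parseGuestClock("1733788754.933814431")
	if err != nil {
		panic(err)
	}
	host, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2024-12-09 23:59:14.847230309 +0000 UTC")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(host)
	// Assumed tolerance, purely for the sketch.
	const tolerance = 2 * time.Second
	if delta > -tolerance && delta < tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is outside tolerance, would resync the guest clock\n", delta)
	}
}
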
	I1209 23:59:14.960222   84259 start.go:83] releasing machines lock for "default-k8s-diff-port-871210", held for 19.916185392s
	I1209 23:59:14.960244   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.960544   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:14.963243   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.963653   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.963693   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.963820   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964285   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964505   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964612   84259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:14.964689   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.964749   84259 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:14.964772   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.967430   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.967811   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.967844   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.967871   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.968018   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.968194   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.968388   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.968405   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.968427   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.968563   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.968562   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.968706   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.968878   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.969038   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:15.072098   84259 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:15.077846   84259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:15.222108   84259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:15.230013   84259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:15.230097   84259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:15.247065   84259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:15.247091   84259 start.go:495] detecting cgroup driver to use...
	I1209 23:59:15.247168   84259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:15.268262   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:15.283824   84259 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:15.283893   84259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:15.299384   84259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:15.318727   84259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:15.457157   84259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:15.642629   84259 docker.go:233] disabling docker service ...
	I1209 23:59:15.642718   84259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:15.661646   84259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:15.681887   84259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:15.838344   84259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:15.971028   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:15.984896   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:16.006631   84259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:59:16.006691   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.019058   84259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:16.019143   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.031838   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.043176   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.057386   84259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:16.068694   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.083326   84259 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.102288   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.113538   84259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:16.126450   84259 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:16.126516   84259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:16.147057   84259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:16.157329   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:16.285726   84259 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:16.385931   84259 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:16.386014   84259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:16.390840   84259 start.go:563] Will wait 60s for crictl version
	I1209 23:59:16.390912   84259 ssh_runner.go:195] Run: which crictl
	I1209 23:59:16.394870   84259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:16.438893   84259 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:16.438986   84259 ssh_runner.go:195] Run: crio --version
	I1209 23:59:16.470118   84259 ssh_runner.go:195] Run: crio --version
	I1209 23:59:16.499603   84259 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:59:16.500665   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:16.503766   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:16.504242   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:16.504329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:16.504609   84259 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:16.508742   84259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:16.520816   84259 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-871210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:16.520944   84259 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:59:16.520988   84259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:16.563220   84259 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:59:16.563315   84259 ssh_runner.go:195] Run: which lz4
	I1209 23:59:16.568962   84259 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:59:16.573715   84259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:59:16.573756   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:59:14.294886   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:16.796597   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:17.795347   83900 node_ready.go:49] node "embed-certs-825613" has status "Ready":"True"
	I1209 23:59:17.795381   83900 node_ready.go:38] duration metric: took 7.505214814s for node "embed-certs-825613" to be "Ready" ...
	I1209 23:59:17.795394   83900 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:17.801650   83900 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:17.808087   83900 pod_ready.go:93] pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:17.808113   83900 pod_ready.go:82] duration metric: took 6.437717ms for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:17.808127   83900 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:16.350842   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting to get IP...
	I1209 23:59:16.351883   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.352326   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.352393   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.352304   85519 retry.go:31] will retry after 268.149849ms: waiting for machine to come up
	I1209 23:59:16.621742   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.622116   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.622145   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.622066   85519 retry.go:31] will retry after 365.051996ms: waiting for machine to come up
	I1209 23:59:16.988590   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.989124   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.989154   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.989089   85519 retry.go:31] will retry after 441.697933ms: waiting for machine to come up
	I1209 23:59:17.432962   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.433453   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.433482   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.433356   85519 retry.go:31] will retry after 503.173846ms: waiting for machine to come up
	I1209 23:59:17.938107   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.938576   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.938610   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.938533   85519 retry.go:31] will retry after 476.993358ms: waiting for machine to come up
	I1209 23:59:18.417462   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:18.418037   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:18.418064   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:18.417989   85519 retry.go:31] will retry after 732.449849ms: waiting for machine to come up
	I1209 23:59:19.152120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.152680   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.152708   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.152624   85519 retry.go:31] will retry after 764.17794ms: waiting for machine to come up
	I1209 23:59:19.918630   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.919113   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.919141   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.919066   85519 retry.go:31] will retry after 1.072352346s: waiting for machine to come up
	I1209 23:59:17.907764   84259 crio.go:462] duration metric: took 1.338836821s to copy over tarball
	I1209 23:59:17.907846   84259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:59:20.133171   84259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.22529043s)
	I1209 23:59:20.133214   84259 crio.go:469] duration metric: took 2.225420822s to extract the tarball
	I1209 23:59:20.133224   84259 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:59:20.169786   84259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:20.211990   84259 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:59:20.212020   84259 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:59:20.212030   84259 kubeadm.go:934] updating node { 192.168.72.54 8444 v1.31.2 crio true true} ...
	I1209 23:59:20.212166   84259 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-871210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:20.212247   84259 ssh_runner.go:195] Run: crio config
	I1209 23:59:20.255141   84259 cni.go:84] Creating CNI manager for ""
	I1209 23:59:20.255169   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:20.255182   84259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:20.255215   84259 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-871210 NodeName:default-k8s-diff-port-871210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:59:20.255338   84259 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-871210"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.54"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:20.255407   84259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:59:20.265160   84259 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:20.265244   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:20.275799   84259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1209 23:59:20.292089   84259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:20.308054   84259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
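
The kubeadm, kubelet, and kube-proxy documents printed above are rendered into /var/tmp/minikube/kubeadm.yaml.new and copied to the node. As a purely hypothetical illustration of reading a couple of the kubelet fields back out of such a multi-document file (gopkg.in/yaml.v3 is an assumed external dependency; none of this is minikube's own code):

package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3" // assumed available via `go get gopkg.in/yaml.v3`
)

// kubeletView pulls out just the two fields discussed above; the real
// KubeletConfiguration carries many more.
type kubeletView struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
}

func main() {
	// A trimmed stand-in for the generated file; the real one holds the full
	// InitConfiguration, ClusterConfiguration, and KubeProxyConfiguration too.
	const generated = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
`
	for _, doc := range strings.Split(generated, "\n---\n") {
		var v kubeletView
		if err := yaml.Unmarshal([]byte(doc), &v); err != nil {
			panic(err)
		}
		if v.Kind == "KubeletConfiguration" {
			fmt.Println(v.CgroupDriver, v.ContainerRuntimeEndpoint)
		}
	}
}
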
	I1209 23:59:20.327464   84259 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:20.331101   84259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:20.343232   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:20.473225   84259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:20.492069   84259 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210 for IP: 192.168.72.54
	I1209 23:59:20.492098   84259 certs.go:194] generating shared ca certs ...
	I1209 23:59:20.492119   84259 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:20.492311   84259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:20.492363   84259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:20.492378   84259 certs.go:256] generating profile certs ...
	I1209 23:59:20.492499   84259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/client.key
	I1209 23:59:20.492605   84259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.key.9cc284f2
	I1209 23:59:20.492663   84259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.key
	I1209 23:59:20.492831   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:20.492872   84259 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:20.492886   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:20.492918   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:20.492951   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:20.492997   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:20.493071   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:20.494023   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:20.543769   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:20.583697   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:20.615170   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:20.638784   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 23:59:20.673708   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:59:20.696690   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:20.719682   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:20.744292   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:20.767643   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:20.790927   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:20.816746   84259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:20.834150   84259 ssh_runner.go:195] Run: openssl version
	I1209 23:59:20.840153   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:20.851006   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.855430   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.855492   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.860866   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:20.871983   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:20.882642   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.886978   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.887050   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.892472   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:20.902959   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:20.913774   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.918344   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.918394   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.923981   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:20.934931   84259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:20.939662   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:20.945633   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:20.951650   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:20.957628   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:20.963600   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:20.969399   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
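
The series of `openssl x509 -noout -in ... -checkend 86400` runs above each exit zero when the certificate is still valid 24 hours from now. An equivalent check written against Go's crypto/x509, as a sketch: the path is the first one from the log, and error handling is kept minimal.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within d; `openssl x509 -checkend` exits non-zero in the same case.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, fmt.Errorf("%s: no PEM certificate found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Same certificate the log checks; adjust the path when running elsewhere.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
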
	I1209 23:59:20.974992   84259 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-871210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:20.975088   84259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:20.975143   84259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:21.012553   84259 cri.go:89] found id: ""
	I1209 23:59:21.012647   84259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:21.023728   84259 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:21.023758   84259 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:21.023808   84259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:21.033788   84259 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:21.034806   84259 kubeconfig.go:125] found "default-k8s-diff-port-871210" server: "https://192.168.72.54:8444"
	I1209 23:59:21.036852   84259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:21.046251   84259 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I1209 23:59:21.046280   84259 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:21.046292   84259 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:21.046354   84259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:21.084915   84259 cri.go:89] found id: ""
	I1209 23:59:21.085000   84259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:21.100592   84259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:21.111180   84259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:21.111204   84259 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:21.111254   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 23:59:21.120027   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:21.120087   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:21.129165   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 23:59:21.138254   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:21.138315   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:21.147276   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 23:59:21.155635   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:21.155711   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:21.164794   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 23:59:21.173421   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:21.173477   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:21.182615   84259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:21.191795   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:21.289295   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.415644   84259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.126305379s)
	I1209 23:59:22.415695   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.616211   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.672571   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.731282   84259 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:22.731363   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:19.816966   83900 pod_ready.go:103] pod "etcd-embed-certs-825613" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:21.814625   83900 pod_ready.go:93] pod "etcd-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.814650   83900 pod_ready.go:82] duration metric: took 4.006513871s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.814662   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.819611   83900 pod_ready.go:93] pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.819634   83900 pod_ready.go:82] duration metric: took 4.964081ms for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.819647   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.823994   83900 pod_ready.go:93] pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.824016   83900 pod_ready.go:82] duration metric: took 4.360902ms for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.824028   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.829102   83900 pod_ready.go:93] pod "kube-proxy-rn6fg" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.829128   83900 pod_ready.go:82] duration metric: took 5.090595ms for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.829161   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:23.983548   83900 pod_ready.go:93] pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:23.983603   83900 pod_ready.go:82] duration metric: took 2.154427639s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:23.983620   83900 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
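
The pod_ready.go lines above poll each system pod until its Ready condition turns True. A minimal client-go sketch of that kind of check; the kubeconfig path and pod name are placeholders taken from the log, and this is not minikube's own helper:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has the Ready condition set to True.
func podIsReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path; the test harness uses its own per-profile contexts.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(context.Background(), cs, "kube-system", "etcd-embed-certs-825613")
	fmt.Println(ready, err)
}

A real wait loop would call a check like this repeatedly with a timeout, much as the 6m0s waits in the log do.
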
	I1209 23:59:20.992747   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:20.993138   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:20.993196   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:20.993111   85519 retry.go:31] will retry after 1.479639772s: waiting for machine to come up
	I1209 23:59:22.474889   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:22.475414   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:22.475445   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:22.475364   85519 retry.go:31] will retry after 2.248463233s: waiting for machine to come up
	I1209 23:59:24.725307   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:24.725765   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:24.725796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:24.725706   85519 retry.go:31] will retry after 2.803512929s: waiting for machine to come up
	I1209 23:59:23.231575   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:23.731704   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:24.231740   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:24.265107   84259 api_server.go:72] duration metric: took 1.533824471s to wait for apiserver process to appear ...
	I1209 23:59:24.265144   84259 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:59:24.265173   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:24.265664   84259 api_server.go:269] stopped: https://192.168.72.54:8444/healthz: Get "https://192.168.72.54:8444/healthz": dial tcp 192.168.72.54:8444: connect: connection refused
	I1209 23:59:24.765571   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.251990   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:27.252018   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:27.252033   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.284733   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:27.284769   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:27.284781   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.313967   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:27.313998   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:25.990720   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:28.490332   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:27.530567   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:27.531086   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:27.531120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:27.531022   85519 retry.go:31] will retry after 2.800227475s: waiting for machine to come up
	I1209 23:59:30.333201   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:30.333660   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:30.333684   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:30.333621   85519 retry.go:31] will retry after 3.041733113s: waiting for machine to come up
	I1209 23:59:27.765806   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.773528   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:27.773564   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:28.266012   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:28.275940   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:28.275965   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:28.765502   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:28.770695   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:28.770725   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:29.265309   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:29.270844   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:29.270883   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:29.765323   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:29.769949   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:29.769974   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:30.265981   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:30.270284   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:30.270313   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:30.765915   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:30.771542   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I1209 23:59:30.781357   84259 api_server.go:141] control plane version: v1.31.2
	I1209 23:59:30.781390   84259 api_server.go:131] duration metric: took 6.516238077s to wait for apiserver health ...
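The sequence above polls /healthz until it returns 200 "ok", treating the early 403 (anonymous access before RBAC bootstrap) and 500 (post-start hooks still pending) responses as "not ready yet". A minimal stand-alone sketch of that polling pattern, not minikube's api_server.go; the URL and timeout are example values taken from the log:

	// Sketch only: poll a kube-apiserver /healthz endpoint until it reports "ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// During bootstrap the apiserver serves a cert the probe cannot verify,
			// so this anonymous check skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200 "ok"
				}
				// 403 and 500 both mean "keep waiting", matching the log above.
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.54:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}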
	I1209 23:59:30.781400   84259 cni.go:84] Creating CNI manager for ""
	I1209 23:59:30.781409   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:30.783438   84259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:59:30.784794   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:59:30.794916   84259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
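The scp above copies a 496-byte bridge conflist into /etc/cni/net.d on the guest. As a rough illustration only (the exact file minikube writes is not shown in the log), a generic bridge + host-local conflist could be generated like this; the subnet and output path are assumed placeholder values:

	// Illustrative only: a generic bridge CNI conflist, not minikube's 1-k8s.conflist.
	package main

	import (
		"encoding/json"
		"log"
		"os"
	)

	func main() {
		conflist := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]interface{}{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/16", // assumed pod CIDR
					},
				},
			},
		}
		data, err := json.MarshalIndent(conflist, "", "  ")
		if err != nil {
			log.Fatal(err)
		}
		// Writing under /etc/cni/net.d requires root on a real node; use /tmp here.
		if err := os.WriteFile("/tmp/1-k8s.conflist", append(data, '\n'), 0o644); err != nil {
			log.Fatal(err)
		}
	}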
	I1209 23:59:30.812099   84259 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:59:30.822915   84259 system_pods.go:59] 8 kube-system pods found
	I1209 23:59:30.822967   84259 system_pods.go:61] "coredns-7c65d6cfc9-wclgl" [22b2e24e-5a03-4d4f-a071-68e414aaf6cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:59:30.822980   84259 system_pods.go:61] "etcd-default-k8s-diff-port-871210" [2058a33f-dd4b-42bc-94d9-4cd130b9389c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:59:30.822990   84259 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-871210" [26ffd01d-48b5-4e38-a91b-bf75dd75e3c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:59:30.823000   84259 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-871210" [cea3a810-c7e9-48d6-9fd7-1f6258c45387] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:59:30.823006   84259 system_pods.go:61] "kube-proxy-d7sxm" [e28ccb5f-a282-4371-ae1a-fca52dd58616] Running
	I1209 23:59:30.823021   84259 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-871210" [790619f6-29a2-4bbc-89ba-a7b93653459b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:59:30.823033   84259 system_pods.go:61] "metrics-server-6867b74b74-lgzdz" [2c251249-58b6-44c4-929a-0f5c963d83b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:59:30.823039   84259 system_pods.go:61] "storage-provisioner" [63078ee5-81b8-4548-88bf-2b89857ea01a] Running
	I1209 23:59:30.823050   84259 system_pods.go:74] duration metric: took 10.914419ms to wait for pod list to return data ...
	I1209 23:59:30.823063   84259 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:59:30.828012   84259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:59:30.828062   84259 node_conditions.go:123] node cpu capacity is 2
	I1209 23:59:30.828076   84259 node_conditions.go:105] duration metric: took 5.004481ms to run NodePressure ...
	I1209 23:59:30.828097   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:31.092839   84259 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 23:59:31.097588   84259 kubeadm.go:739] kubelet initialised
	I1209 23:59:31.097613   84259 kubeadm.go:740] duration metric: took 4.744115ms waiting for restarted kubelet to initialise ...
	I1209 23:59:31.097623   84259 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:31.104318   84259 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:30.991511   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:33.490390   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
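Here pod_ready.go polls each system-critical pod until its Ready condition turns True (the metrics-server lines above are the same kind of check from a parallel test). A rough client-go sketch of that wait, with the kubeconfig path, namespace, and pod name taken as placeholder assumptions:

	// Sketch only: wait until a pod's Ready condition is True, or time out.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; minikube uses the profile's own kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-wclgl", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for pod to be Ready")
	}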
	I1209 23:59:34.824186   83859 start.go:364] duration metric: took 55.338760885s to acquireMachinesLock for "no-preload-048296"
	I1209 23:59:34.824245   83859 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:59:34.824272   83859 fix.go:54] fixHost starting: 
	I1209 23:59:34.824660   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:34.824713   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:34.842421   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I1209 23:59:34.842851   83859 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:34.843297   83859 main.go:141] libmachine: Using API Version  1
	I1209 23:59:34.843319   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:34.843701   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:34.843916   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:34.844066   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1209 23:59:34.845768   83859 fix.go:112] recreateIfNeeded on no-preload-048296: state=Stopped err=<nil>
	I1209 23:59:34.845792   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	W1209 23:59:34.845958   83859 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:59:34.847617   83859 out.go:177] * Restarting existing kvm2 VM for "no-preload-048296" ...
	I1209 23:59:33.378848   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379297   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has current primary IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379326   84547 main.go:141] libmachine: (old-k8s-version-720064) Found IP for machine: 192.168.39.188
	I1209 23:59:33.379351   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserving static IP address...
	I1209 23:59:33.379701   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.379722   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserved static IP address: 192.168.39.188
	I1209 23:59:33.379743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | skip adding static IP to network mk-old-k8s-version-720064 - found existing host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"}
	I1209 23:59:33.379759   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Getting to WaitForSSH function...
	I1209 23:59:33.379776   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting for SSH to be available...
	I1209 23:59:33.381697   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.381990   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.382027   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.382137   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH client type: external
	I1209 23:59:33.382163   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa (-rw-------)
	I1209 23:59:33.382193   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:33.382212   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | About to run SSH command:
	I1209 23:59:33.382229   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | exit 0
	I1209 23:59:33.507653   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | SSH cmd err, output: <nil>: 
	I1209 23:59:33.507957   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetConfigRaw
	I1209 23:59:33.508555   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.511020   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511399   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.511431   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511695   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:59:33.511932   84547 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:33.511958   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:33.512144   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.514383   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514722   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.514743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514876   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.515048   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515187   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515349   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.515596   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.515835   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.515850   84547 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:33.623370   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:33.623407   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623670   84547 buildroot.go:166] provisioning hostname "old-k8s-version-720064"
	I1209 23:59:33.623708   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623909   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.626370   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626749   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.626781   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626943   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.627172   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627348   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627490   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.627666   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.627868   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.627884   84547 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-720064 && echo "old-k8s-version-720064" | sudo tee /etc/hostname
	I1209 23:59:33.749338   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-720064
	
	I1209 23:59:33.749370   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.752401   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.752774   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.752807   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.753000   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.753180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753372   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753531   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.753674   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.753837   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.753853   84547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-720064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-720064/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-720064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:33.872512   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
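The provisioning steps above run shell commands on the guest over SSH, first through an external /usr/bin/ssh invocation and then through a native client. A minimal sketch of running one such command with golang.org/x/crypto/ssh (this is not minikube's ssh helper; the host, user, and key path simply echo values from the log):

	// Sketch only: run `hostname` on the guest over SSH with a private key.
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", "192.168.39.188:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}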
	I1209 23:59:33.872548   84547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:33.872575   84547 buildroot.go:174] setting up certificates
	I1209 23:59:33.872585   84547 provision.go:84] configureAuth start
	I1209 23:59:33.872597   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.872906   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.875719   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876012   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.876051   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876210   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.878539   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.878857   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.878894   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.879025   84547 provision.go:143] copyHostCerts
	I1209 23:59:33.879087   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:33.879100   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:33.879166   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:33.879254   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:33.879262   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:33.879283   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:33.879338   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:33.879346   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:33.879365   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:33.879414   84547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-720064 san=[127.0.0.1 192.168.39.188 localhost minikube old-k8s-version-720064]
	I1209 23:59:34.211417   84547 provision.go:177] copyRemoteCerts
	I1209 23:59:34.211475   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:34.211504   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.214285   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.214686   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214789   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.215006   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.215180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.215345   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.301399   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:59:34.326216   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:34.349136   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 23:59:34.371597   84547 provision.go:87] duration metric: took 498.999263ms to configureAuth
	I1209 23:59:34.371633   84547 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:34.371832   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:59:34.371902   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.374649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.374985   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.375031   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.375161   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.375368   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375541   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.375897   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.376128   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.376152   84547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:34.588357   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:34.588391   84547 machine.go:96] duration metric: took 1.076442399s to provisionDockerMachine
	I1209 23:59:34.588402   84547 start.go:293] postStartSetup for "old-k8s-version-720064" (driver="kvm2")
	I1209 23:59:34.588412   84547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:34.588443   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.588749   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:34.588772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.591413   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.591805   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.591836   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.592001   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.592176   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.592325   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.592431   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.673445   84547 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:34.677394   84547 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:34.677421   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:34.677498   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:34.677600   84547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:34.677716   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:34.686261   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:34.709412   84547 start.go:296] duration metric: took 120.993275ms for postStartSetup
	I1209 23:59:34.709467   84547 fix.go:56] duration metric: took 19.749026723s for fixHost
	I1209 23:59:34.709495   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.712458   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712762   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.712796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712930   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.713158   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713335   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713506   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.713690   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.713852   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.713862   84547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:34.823993   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788774.778513264
	
	I1209 23:59:34.824022   84547 fix.go:216] guest clock: 1733788774.778513264
	I1209 23:59:34.824034   84547 fix.go:229] Guest: 2024-12-09 23:59:34.778513264 +0000 UTC Remote: 2024-12-09 23:59:34.709473288 +0000 UTC m=+249.236536627 (delta=69.039976ms)
	I1209 23:59:34.824067   84547 fix.go:200] guest clock delta is within tolerance: 69.039976ms
	I1209 23:59:34.824075   84547 start.go:83] releasing machines lock for "old-k8s-version-720064", held for 19.863672525s
	I1209 23:59:34.824110   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.824391   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:34.827165   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827596   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.827629   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827894   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828467   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828690   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828802   84547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:34.828865   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.828888   84547 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:34.828917   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.831978   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832052   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832514   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832548   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832572   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832748   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832751   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832899   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832982   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.832986   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.833198   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833217   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833370   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.833374   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.949044   84547 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:34.954999   84547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:35.100096   84547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:35.105764   84547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:35.105852   84547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:35.126893   84547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:35.126915   84547 start.go:495] detecting cgroup driver to use...
	I1209 23:59:35.126968   84547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:35.143236   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:35.157019   84547 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:35.157098   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:35.170608   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:35.184816   84547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:35.310043   84547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:35.486472   84547 docker.go:233] disabling docker service ...
	I1209 23:59:35.486653   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:35.501938   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:35.516997   84547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:35.667304   84547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:35.821316   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:35.836423   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:35.854633   84547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 23:59:35.854719   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.865592   84547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:35.865677   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.877247   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.889007   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.900196   84547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:35.912403   84547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:35.922919   84547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:35.922987   84547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:35.939439   84547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:35.949715   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:36.100115   84547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:36.207129   84547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:36.207206   84547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:36.213672   84547 start.go:563] Will wait 60s for crictl version
	I1209 23:59:36.213758   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:36.218204   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:36.255262   84547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:36.255367   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.286277   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.317334   84547 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
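For readability, the CRI-O preparation that the 84547 log segment above performs command by command can be condensed into the following shell sketch. Every command and value is taken from the log itself, so this is a recap of the logged steps rather than an additional procedure.

    # Point crictl at the CRI-O socket
    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    # Pin the pause image and the cgroup driver expected by the v1.20.0 control plane
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # The bridge-nf-call-iptables sysctl was missing, so the module is loaded explicitly
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    # Apply everything
    sudo systemctl daemon-reload && sudo systemctl restart crio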
	I1209 23:59:34.848733   83859 main.go:141] libmachine: (no-preload-048296) Calling .Start
	I1209 23:59:34.848943   83859 main.go:141] libmachine: (no-preload-048296) Ensuring networks are active...
	I1209 23:59:34.849641   83859 main.go:141] libmachine: (no-preload-048296) Ensuring network default is active
	I1209 23:59:34.850039   83859 main.go:141] libmachine: (no-preload-048296) Ensuring network mk-no-preload-048296 is active
	I1209 23:59:34.850540   83859 main.go:141] libmachine: (no-preload-048296) Getting domain xml...
	I1209 23:59:34.851224   83859 main.go:141] libmachine: (no-preload-048296) Creating domain...
	I1209 23:59:36.270761   83859 main.go:141] libmachine: (no-preload-048296) Waiting to get IP...
	I1209 23:59:36.271970   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.272578   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.272645   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.272536   85657 retry.go:31] will retry after 253.181092ms: waiting for machine to come up
	I1209 23:59:36.527128   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.527735   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.527762   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.527683   85657 retry.go:31] will retry after 297.608817ms: waiting for machine to come up
	I1209 23:59:36.827372   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.828819   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.828849   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.828763   85657 retry.go:31] will retry after 374.112777ms: waiting for machine to come up
	I1209 23:59:33.112105   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:35.611353   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:36.115472   84259 pod_ready.go:93] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:36.115504   84259 pod_ready.go:82] duration metric: took 5.011153637s for pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:36.115521   84259 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:35.492415   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:37.992287   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:36.318651   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:36.321927   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322336   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:36.322366   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322619   84547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:36.327938   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:36.344152   84547 kubeadm.go:883] updating cluster {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:36.344279   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:59:36.344328   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:36.391261   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:36.391348   84547 ssh_runner.go:195] Run: which lz4
	I1209 23:59:36.395391   84547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:59:36.399585   84547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:59:36.399622   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 23:59:37.969251   84547 crio.go:462] duration metric: took 1.573891158s to copy over tarball
	I1209 23:59:37.969362   84547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
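The preload handling above follows a simple pattern: check whether the images are already known to CRI-O, and if not, copy the version-specific tarball into the guest and unpack it under /var. A minimal guest-side sketch of that flow, using the commands recorded in the log (the copy itself is performed by minikube over SSH):

    # Is the preload tarball already on the guest?
    stat -c "%s %y" /preloaded.tar.lz4 || echo "not present: minikube copies the cached tarball over SSH"
    # Unpack into /var, preserving security xattrs, with lz4 decompression
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    # Confirm the images are now visible to the runtime
    sudo crictl images --output json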
	I1209 23:59:37.204687   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:37.205458   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:37.205479   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:37.205372   85657 retry.go:31] will retry after 420.490569ms: waiting for machine to come up
	I1209 23:59:37.627167   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:37.629813   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:37.629839   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:37.629687   85657 retry.go:31] will retry after 561.795207ms: waiting for machine to come up
	I1209 23:59:38.193652   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:38.194178   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:38.194200   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:38.194119   85657 retry.go:31] will retry after 925.018936ms: waiting for machine to come up
	I1209 23:59:39.121476   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:39.122014   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:39.122046   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:39.121958   85657 retry.go:31] will retry after 984.547761ms: waiting for machine to come up
	I1209 23:59:40.108478   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:40.108947   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:40.108981   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:40.108925   85657 retry.go:31] will retry after 1.261498912s: waiting for machine to come up
	I1209 23:59:41.372403   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:41.372901   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:41.372922   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:41.372884   85657 retry.go:31] will retry after 1.498283156s: waiting for machine to come up
	I1209 23:59:38.122964   84259 pod_ready.go:93] pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:38.122990   84259 pod_ready.go:82] duration metric: took 2.007459606s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:38.123004   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.133257   84259 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:40.631911   84259 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:40.631940   84259 pod_ready.go:82] duration metric: took 2.508926956s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.631957   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.492044   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:42.991179   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:41.050494   84547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081073233s)
	I1209 23:59:41.050524   84547 crio.go:469] duration metric: took 3.081237568s to extract the tarball
	I1209 23:59:41.050533   84547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:59:41.092694   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:41.125264   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:41.125289   84547 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.125378   84547 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.125388   84547 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.125436   84547 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 23:59:41.125466   84547 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.125497   84547 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.125515   84547 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127285   84547 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.127325   84547 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 23:59:41.127376   84547 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.127405   84547 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127421   84547 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.127300   84547 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.302277   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.314265   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 23:59:41.323683   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.326327   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.345042   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.345550   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.362324   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.367612   84547 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 23:59:41.367663   84547 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.367706   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.406644   84547 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 23:59:41.406691   84547 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 23:59:41.406783   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.450490   84547 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 23:59:41.450550   84547 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.450601   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.477332   84547 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 23:59:41.477389   84547 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.477438   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499714   84547 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 23:59:41.499757   84547 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.499783   84547 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 23:59:41.499806   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499818   84547 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.499861   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505128   84547 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 23:59:41.505179   84547 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.505205   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.505249   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.505212   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505306   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.505342   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.507860   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.507923   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.643108   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.644251   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.644273   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.644358   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.644400   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.650732   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.650817   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.777981   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.789388   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.789435   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.789491   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.789561   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.789592   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.802784   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.912370   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.914494   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 23:59:41.944460   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 23:59:41.944564   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 23:59:41.944569   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.944660   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 23:59:41.944662   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 23:59:41.944726   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 23:59:42.104003   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 23:59:42.104074   84547 cache_images.go:92] duration metric: took 978.7734ms to LoadCachedImages
	W1209 23:59:42.104176   84547 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1209 23:59:42.104198   84547 kubeadm.go:934] updating node { 192.168.39.188 8443 v1.20.0 crio true true} ...
	I1209 23:59:42.104326   84547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-720064 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:42.104431   84547 ssh_runner.go:195] Run: crio config
	I1209 23:59:42.154016   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:59:42.154041   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:42.154050   84547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:42.154066   84547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.188 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-720064 NodeName:old-k8s-version-720064 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 23:59:42.154226   84547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-720064"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.188
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.188"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:42.154292   84547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 23:59:42.164866   84547 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:42.164943   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:42.175137   84547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 23:59:42.192176   84547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:42.210695   84547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 23:59:42.230364   84547 ssh_runner.go:195] Run: grep 192.168.39.188	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:42.234239   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:42.246747   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:42.379643   84547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:42.396626   84547 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064 for IP: 192.168.39.188
	I1209 23:59:42.396713   84547 certs.go:194] generating shared ca certs ...
	I1209 23:59:42.396746   84547 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.396942   84547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:42.397007   84547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:42.397023   84547 certs.go:256] generating profile certs ...
	I1209 23:59:42.397191   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/client.key
	I1209 23:59:42.397270   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key.abcbe3fa
	I1209 23:59:42.397330   84547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key
	I1209 23:59:42.397516   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:42.397564   84547 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:42.397580   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:42.397623   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:42.397662   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:42.397701   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:42.397768   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:42.398726   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:42.450053   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:42.475685   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:42.514396   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:42.562383   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 23:59:42.589690   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:59:42.614149   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:42.647112   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:42.671117   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:42.694303   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:42.722563   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:42.753478   84547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:42.773673   84547 ssh_runner.go:195] Run: openssl version
	I1209 23:59:42.779930   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:42.791531   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796422   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796536   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.802386   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:42.817058   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:42.828570   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833292   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833357   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.839679   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:42.850729   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:42.861699   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866417   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866479   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.873602   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:42.884398   84547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:42.889137   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:42.896108   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:42.902584   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:42.908706   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:42.914313   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:42.919943   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 23:59:42.925417   84547 kubeadm.go:392] StartCluster: {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:42.925546   84547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:42.925602   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:42.962948   84547 cri.go:89] found id: ""
	I1209 23:59:42.963030   84547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:42.973746   84547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:42.973768   84547 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:42.973819   84547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:42.983593   84547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:42.984528   84547 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-720064" does not appear in /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:59:42.985106   84547 kubeconfig.go:62] /home/jenkins/minikube-integration/19888-18950/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-720064" cluster setting kubeconfig missing "old-k8s-version-720064" context setting]
	I1209 23:59:42.985981   84547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.996842   84547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:43.007858   84547 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.188
	I1209 23:59:43.007901   84547 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:43.007916   84547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:43.007982   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:43.046416   84547 cri.go:89] found id: ""
	I1209 23:59:43.046495   84547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:43.063226   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:43.073919   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:43.073946   84547 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:43.073991   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:59:43.084217   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:43.084292   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:43.094196   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:59:43.104098   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:43.104178   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:43.115056   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.124696   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:43.124761   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.135180   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:59:43.146325   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:43.146392   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:43.156017   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
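The four grep-and-remove pairs above reduce to a single loop; the following shell sketch restates them for readability (file names and the control-plane endpoint are copied from the log lines, and the loop itself is illustrative rather than minikube's actual Go code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected control-plane endpoint
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done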
	I1209 23:59:43.166584   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:43.376941   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.198180   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.431002   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.549010   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
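Taken together, the five commands above walk kubeadm's init phases in order against the same generated config. A minimal shell sketch of that sequence (binary path, version and config path are the ones printed in the log; minikube drives these over SSH rather than running them locally like this):

    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally left unquoted so "certs all" expands to two arguments
      sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done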
	I1209 23:59:44.644736   84547 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:44.644827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:45.145713   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:42.872724   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:42.873229   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:42.873260   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:42.873178   85657 retry.go:31] will retry after 1.469240763s: waiting for machine to come up
	I1209 23:59:44.344825   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:44.345339   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:44.345374   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:44.345317   85657 retry.go:31] will retry after 1.85673524s: waiting for machine to come up
	I1209 23:59:46.203693   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:46.204128   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:46.204152   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:46.204065   85657 retry.go:31] will retry after 3.5180616s: waiting for machine to come up
	I1209 23:59:43.003893   84259 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:43.734778   84259 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:43.734806   84259 pod_ready.go:82] duration metric: took 3.102839103s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.734821   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d7sxm" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.743393   84259 pod_ready.go:93] pod "kube-proxy-d7sxm" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:43.743421   84259 pod_ready.go:82] duration metric: took 8.592191ms for pod "kube-proxy-d7sxm" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.743435   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:44.250028   84259 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:44.250051   84259 pod_ready.go:82] duration metric: took 506.607784ms for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:44.250063   84259 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:46.256096   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:45.494189   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:47.993074   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:45.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.145300   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.644880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.144955   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.645081   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.145132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.645278   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.144932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.645874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:50.145061   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
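The run of identical pgrep probes above (and continuing below) is a plain poll loop waiting for a kube-apiserver process to appear. A shell sketch of the same wait, with the roughly 500ms interval read off the log timestamps:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5   # retry until a kube-apiserver process shows up
    done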
	I1209 23:59:49.725528   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:49.726040   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:49.726069   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:49.725982   85657 retry.go:31] will retry after 3.98915487s: waiting for machine to come up
	I1209 23:59:48.256397   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.757098   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.491110   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:52.989722   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.645171   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.144908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.645198   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.145700   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.145782   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.645678   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.145521   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:55.145578   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.718634   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.719141   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has current primary IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.719163   83859 main.go:141] libmachine: (no-preload-048296) Found IP for machine: 192.168.61.182
	I1209 23:59:53.719173   83859 main.go:141] libmachine: (no-preload-048296) Reserving static IP address...
	I1209 23:59:53.719594   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "no-preload-048296", mac: "52:54:00:c6:cf:c7", ip: "192.168.61.182"} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.719621   83859 main.go:141] libmachine: (no-preload-048296) DBG | skip adding static IP to network mk-no-preload-048296 - found existing host DHCP lease matching {name: "no-preload-048296", mac: "52:54:00:c6:cf:c7", ip: "192.168.61.182"}
	I1209 23:59:53.719632   83859 main.go:141] libmachine: (no-preload-048296) Reserved static IP address: 192.168.61.182
	I1209 23:59:53.719641   83859 main.go:141] libmachine: (no-preload-048296) Waiting for SSH to be available...
	I1209 23:59:53.719671   83859 main.go:141] libmachine: (no-preload-048296) DBG | Getting to WaitForSSH function...
	I1209 23:59:53.722015   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.722309   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.722342   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.722445   83859 main.go:141] libmachine: (no-preload-048296) DBG | Using SSH client type: external
	I1209 23:59:53.722471   83859 main.go:141] libmachine: (no-preload-048296) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa (-rw-------)
	I1209 23:59:53.722504   83859 main.go:141] libmachine: (no-preload-048296) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:53.722517   83859 main.go:141] libmachine: (no-preload-048296) DBG | About to run SSH command:
	I1209 23:59:53.722529   83859 main.go:141] libmachine: (no-preload-048296) DBG | exit 0
	I1209 23:59:53.843261   83859 main.go:141] libmachine: (no-preload-048296) DBG | SSH cmd err, output: <nil>: 
	I1209 23:59:53.843673   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetConfigRaw
	I1209 23:59:53.844367   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:53.846889   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.847170   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.847203   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.847433   83859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/config.json ...
	I1209 23:59:53.847656   83859 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:53.847675   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:53.847834   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:53.849867   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.850179   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.850213   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.850372   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:53.850548   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.850702   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.850878   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:53.851058   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:53.851288   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:53.851303   83859 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:53.947889   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:53.947916   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:53.948220   83859 buildroot.go:166] provisioning hostname "no-preload-048296"
	I1209 23:59:53.948259   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:53.948496   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:53.951070   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.951479   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.951509   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.951729   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:53.951919   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.952120   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.952280   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:53.952456   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:53.952650   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:53.952672   83859 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-048296 && echo "no-preload-048296" | sudo tee /etc/hostname
	I1209 23:59:54.065706   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-048296
	
	I1209 23:59:54.065738   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.068629   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.068942   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.068975   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.069241   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.069493   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.069731   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.069939   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.070156   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.070325   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.070346   83859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-048296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-048296/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-048296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:54.180424   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:59:54.180460   83859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:54.180479   83859 buildroot.go:174] setting up certificates
	I1209 23:59:54.180490   83859 provision.go:84] configureAuth start
	I1209 23:59:54.180499   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:54.180802   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:54.183349   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.183703   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.183732   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.183852   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.186076   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.186341   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.186372   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.186498   83859 provision.go:143] copyHostCerts
	I1209 23:59:54.186554   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:54.186564   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:54.186627   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:54.186715   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:54.186723   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:54.186744   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:54.186805   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:54.186814   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:54.186831   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:54.186881   83859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.no-preload-048296 san=[127.0.0.1 192.168.61.182 localhost minikube no-preload-048296]
	I1209 23:59:54.291230   83859 provision.go:177] copyRemoteCerts
	I1209 23:59:54.291296   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:54.291319   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.293968   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.294257   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.294283   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.294451   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.294654   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.294814   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.294963   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.377572   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:54.401222   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 23:59:54.425323   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:59:54.448358   83859 provision.go:87] duration metric: took 267.838318ms to configureAuth
	I1209 23:59:54.448387   83859 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:54.448563   83859 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:54.448629   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.451082   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.451422   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.451450   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.451678   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.451906   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.452088   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.452222   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.452374   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.452550   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.452567   83859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:54.665629   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:54.665663   83859 machine.go:96] duration metric: took 817.991465ms to provisionDockerMachine
	I1209 23:59:54.665677   83859 start.go:293] postStartSetup for "no-preload-048296" (driver="kvm2")
	I1209 23:59:54.665690   83859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:54.665712   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.666016   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:54.666043   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.668993   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.669501   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.669532   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.669666   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.669859   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.670004   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.670160   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.752122   83859 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:54.756546   83859 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:54.756569   83859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:54.756640   83859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:54.756706   83859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:54.756797   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:54.766465   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:54.792214   83859 start.go:296] duration metric: took 126.521539ms for postStartSetup
	I1209 23:59:54.792263   83859 fix.go:56] duration metric: took 19.967992145s for fixHost
	I1209 23:59:54.792284   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.794921   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.795304   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.795334   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.795549   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.795807   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.795986   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.796154   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.796333   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.796485   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.796495   83859 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:54.900382   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788794.870799171
	
	I1209 23:59:54.900413   83859 fix.go:216] guest clock: 1733788794.870799171
	I1209 23:59:54.900423   83859 fix.go:229] Guest: 2024-12-09 23:59:54.870799171 +0000 UTC Remote: 2024-12-09 23:59:54.792267927 +0000 UTC m=+357.892420200 (delta=78.531244ms)
	I1209 23:59:54.900443   83859 fix.go:200] guest clock delta is within tolerance: 78.531244ms
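The guest-clock check above simply compares the VM's "date +%s.%N" output against the host wall clock and accepts the drift if it is small. A hedged shell sketch of that comparison (the SSH target is the node from the log; the 1-second tolerance is an assumption for illustration, the log only shows a ~78ms delta being accepted):

    guest=$(ssh docker@192.168.61.182 'date +%s.%N')   # guest wall clock
    host=$(date +%s.%N)                                # host wall clock
    delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; print d }')
    if awk -v d="$delta" 'BEGIN { exit !(d < 1.0) }'; then
      echo "guest clock delta ${delta}s is within tolerance"
    else
      echo "guest clock delta ${delta}s exceeds tolerance"
    fi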
	I1209 23:59:54.900448   83859 start.go:83] releasing machines lock for "no-preload-048296", held for 20.076230285s
	I1209 23:59:54.900466   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.900763   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:54.903437   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.903762   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.903785   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.903941   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904412   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904583   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904674   83859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:54.904735   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.904788   83859 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:54.904815   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.907540   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.907693   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.907936   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.907960   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.908083   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.908098   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.908108   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.908269   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.908332   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.908431   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.908578   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.908607   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.908750   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.908753   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.980588   83859 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:55.003079   83859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:55.152159   83859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:55.158212   83859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:55.158284   83859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:55.177510   83859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:55.177539   83859 start.go:495] detecting cgroup driver to use...
	I1209 23:59:55.177616   83859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:55.194262   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:55.210699   83859 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:55.210770   83859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:55.226308   83859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:55.242175   83859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:55.361845   83859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:55.500415   83859 docker.go:233] disabling docker service ...
	I1209 23:59:55.500487   83859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:55.515689   83859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:55.528651   83859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:55.663341   83859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:55.776773   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
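The block above is the runtime cleanup that precedes cri-o configuration: cri-dockerd and dockerd are stopped, their sockets disabled and their services masked so that cri-o is the only CRI left on the node. A consolidated shell sketch (unit names are the ones in the log):

    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" || true   # stop whatever is running; ignore units that are absent
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service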
	I1209 23:59:55.790155   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:55.807749   83859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:59:55.807807   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.817580   83859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:55.817644   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.827975   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.837871   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.848118   83859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:55.858322   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.867754   83859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.884626   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.894254   83859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:55.903128   83859 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:55.903187   83859 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:55.914887   83859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:55.924665   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:56.028206   83859 ssh_runner.go:195] Run: sudo systemctl restart crio
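The sequence from the crictl.yaml write down to the crio restart is the cri-o reconfiguration pass. The main steps, gathered into one shell sketch (paths, sed expressions and values are copied from the log lines above; a couple of housekeeping steps are omitted):

    sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
    sudo grep -q '^ *default_sysctls' "$conf" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
    sudo modprobe br_netfilter                           # the sysctl probe above failed, so the module is loaded explicitly
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio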
	I1209 23:59:56.117484   83859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:56.117573   83859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:56.122345   83859 start.go:563] Will wait 60s for crictl version
	I1209 23:59:56.122401   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.126032   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:56.161884   83859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:56.161978   83859 ssh_runner.go:195] Run: crio --version
	I1209 23:59:56.194758   83859 ssh_runner.go:195] Run: crio --version
	I1209 23:59:56.224190   83859 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:59:56.225560   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:56.228550   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:56.228928   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:56.228950   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:56.229163   83859 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:56.233615   83859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:56.245890   83859 kubeadm.go:883] updating cluster {Name:no-preload-048296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:56.246079   83859 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:59:56.246132   83859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:56.285601   83859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:59:56.285629   83859 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:59:56.285694   83859 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:56.285734   83859 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.285761   83859 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.285813   83859 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.285858   83859 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.285900   83859 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.285810   83859 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 23:59:56.285983   83859 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.287414   83859 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 23:59:56.287436   83859 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.287436   83859 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.287446   83859 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.287465   83859 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.287500   83859 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.287550   83859 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:56.287550   83859 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.437713   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.453590   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.457708   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.468937   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1209 23:59:56.474766   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.477321   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.483791   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.530686   83859 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1209 23:59:56.530735   83859 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.530786   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.603275   83859 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1209 23:59:56.603327   83859 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.603376   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.612591   83859 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1209 23:59:56.612635   83859 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.612686   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726597   83859 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1209 23:59:56.726643   83859 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.726650   83859 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1209 23:59:56.726682   83859 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.726692   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726728   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726752   83859 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1209 23:59:56.726777   83859 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.726808   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726824   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.726882   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.726889   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.795578   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.795624   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.795646   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.795581   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.795723   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.795847   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.905584   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.909282   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.927659   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.927832   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.927928   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.932487   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:53.256494   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:55.257888   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:54.991894   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:57.491500   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:55.645853   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.145844   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.644899   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.145431   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.645625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.145096   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.645933   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.145181   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.645624   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:00.145874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.971745   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1209 23:59:56.971844   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:56.987218   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 23:59:56.987380   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 23:59:57.050691   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:57.064778   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:57.087868   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:57.087972   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:57.089176   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 23:59:57.089235   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1209 23:59:57.089262   83859 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:57.089269   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 23:59:57.089311   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:57.089355   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1209 23:59:57.098939   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 23:59:57.099044   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 23:59:57.148482   83859 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1209 23:59:57.148532   83859 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:57.148588   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:57.173723   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 23:59:57.173810   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 23:59:57.186640   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 23:59:57.186734   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:00.723670   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.634331388s)
	I1210 00:00:00.723704   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1210 00:00:00.723717   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (3.634425647s)
	I1210 00:00:00.723751   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1210 00:00:00.723728   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 00:00:00.723767   83859 ssh_runner.go:235] Completed: which crictl: (3.575163822s)
	I1210 00:00:00.723815   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:00.723857   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (3.550033094s)
	I1210 00:00:00.723816   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 00:00:00.723909   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (3.537159613s)
	I1210 00:00:00.723934   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1210 00:00:00.723854   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.624679414s)
	I1210 00:00:00.723974   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1210 00:00:00.723880   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1209 23:59:57.756271   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:00.256451   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:02.256956   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:59.491859   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:01.992860   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:00.644886   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.145063   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.645932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.145743   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.645396   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.145007   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.645927   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.145876   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:05.145906   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.713826   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.989917623s)
	I1210 00:00:02.713866   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1210 00:00:02.713883   83859 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.99004494s)
	I1210 00:00:02.713896   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 00:00:02.713949   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:02.713952   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 00:00:02.753832   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:04.780306   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.066273178s)
	I1210 00:00:04.780344   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1210 00:00:04.780368   83859 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:04.780432   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:04.780430   83859 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.026560705s)
	I1210 00:00:04.780544   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 00:00:04.780618   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:06.756731   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.976269529s)
	I1210 00:00:06.756763   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1210 00:00:06.756774   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.976141757s)
	I1210 00:00:06.756793   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 00:00:06.756791   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 00:00:06.756846   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 00:00:04.258390   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:06.756422   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:04.490429   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:06.991054   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:08.991247   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:05.644919   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.145142   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.645240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.144864   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.145246   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.645965   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.645731   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:10.145638   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.712114   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.955245193s)
	I1210 00:00:08.712142   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1210 00:00:08.712166   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 00:00:08.712212   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 00:00:10.892294   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.180058873s)
	I1210 00:00:10.892319   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1210 00:00:10.892352   83859 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:10.892391   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:11.542174   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 00:00:11.542214   83859 cache_images.go:123] Successfully loaded all cached images
	I1210 00:00:11.542219   83859 cache_images.go:92] duration metric: took 15.256564286s to LoadCachedImages
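Note: the cache-load sequence above follows a stat-then-load pattern: each image tarball is stat'd on the node, the transfer is skipped when an identical copy already exists, and sudo podman load imports it into the CRI-O image store. A minimal sketch of that pattern (the host alias "node", the local cache/ directory, and the image list are illustrative; the real run drives this through minikube's own SSH runner and also compares size/mtime, not just existence):

  # copy each cached tarball only if it is missing on the node, then load it into podman/CRI-O
  for img in etcd_3.5.15-0 kube-scheduler_v1.31.2 coredns_v1.11.3; do
    remote=/var/lib/minikube/images/$img
    ssh node "stat -c '%s %y' $remote" >/dev/null 2>&1 || scp "cache/$img" "node:$remote"
    ssh node "sudo podman load -i $remote"
  done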
	I1210 00:00:11.542230   83859 kubeadm.go:934] updating node { 192.168.61.182 8443 v1.31.2 crio true true} ...
	I1210 00:00:11.542334   83859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-048296 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:00:11.542397   83859 ssh_runner.go:195] Run: crio config
	I1210 00:00:11.586768   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:00:11.586787   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:00:11.586795   83859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:00:11.586817   83859 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.182 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-048296 NodeName:no-preload-048296 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:00:11.586932   83859 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-048296"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.182"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.182"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:00:11.586992   83859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:00:11.597936   83859 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:00:11.597996   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:00:11.608068   83859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 00:00:11.624881   83859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:00:11.640934   83859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
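Note: the configuration rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new and consumed by the kubeadm init phase commands later in this log. If a run fails at that step, the rendered file can be inspected and sanity-checked by hand; this is a hedged sketch assuming the kubeadm binary path from the log and the "kubeadm config validate" subcommand available in kubeadm v1.26+:

  # inspect and validate the config minikube rendered on the node
  sudo cat /var/tmp/minikube/kubeadm.yaml.new
  sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new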
	I1210 00:00:11.658113   83859 ssh_runner.go:195] Run: grep 192.168.61.182	control-plane.minikube.internal$ /etc/hosts
	I1210 00:00:11.662360   83859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:00:11.674732   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:00:11.803743   83859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:00:11.821169   83859 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296 for IP: 192.168.61.182
	I1210 00:00:11.821191   83859 certs.go:194] generating shared ca certs ...
	I1210 00:00:11.821211   83859 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:00:11.821404   83859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1210 00:00:11.821485   83859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1210 00:00:11.821502   83859 certs.go:256] generating profile certs ...
	I1210 00:00:11.821583   83859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/client.key
	I1210 00:00:11.821644   83859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.key.9304569d
	I1210 00:00:11.821677   83859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.key
	I1210 00:00:11.821783   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1210 00:00:11.821811   83859 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1210 00:00:11.821821   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:00:11.821848   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1210 00:00:11.821881   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:00:11.821917   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1210 00:00:11.821965   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1210 00:00:11.822649   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:00:11.867065   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 00:00:11.898744   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:00:11.932711   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:00:08.758664   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:11.257156   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:11.491011   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:13.491036   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:10.645462   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.145646   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.645921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.145804   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.644924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.145055   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.645811   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.144877   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.645709   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.145827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.966670   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:00:11.997827   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:00:12.023344   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:00:12.048872   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:00:12.074332   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:00:12.097886   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1210 00:00:12.121883   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1210 00:00:12.145451   83859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:00:12.167089   83859 ssh_runner.go:195] Run: openssl version
	I1210 00:00:12.172747   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1210 00:00:12.183718   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.188458   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.188521   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.194537   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:00:12.205725   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:00:12.216441   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.221179   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.221237   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.226942   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:00:12.237830   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1210 00:00:12.248188   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.253689   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.253747   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.259326   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1210 00:00:12.269796   83859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:00:12.274316   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:00:12.280263   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:00:12.286235   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:00:12.292330   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:00:12.298347   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:00:12.304186   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
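Note: each of the openssl x509 -checkend 86400 calls above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether the cluster certs need regenerating. To see the actual expiry of one of these certificates by hand (path taken from the log):

  # print the notAfter date of the apiserver-kubelet-client certificate
  sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt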
	I1210 00:00:12.310189   83859 kubeadm.go:392] StartCluster: {Name:no-preload-048296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:00:12.310296   83859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:00:12.310349   83859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:00:12.346589   83859 cri.go:89] found id: ""
	I1210 00:00:12.346666   83859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:00:12.357674   83859 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 00:00:12.357701   83859 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 00:00:12.357753   83859 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 00:00:12.367817   83859 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:00:12.368827   83859 kubeconfig.go:125] found "no-preload-048296" server: "https://192.168.61.182:8443"
	I1210 00:00:12.371117   83859 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 00:00:12.380367   83859 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.182
	I1210 00:00:12.380393   83859 kubeadm.go:1160] stopping kube-system containers ...
	I1210 00:00:12.380404   83859 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 00:00:12.380446   83859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:00:12.413083   83859 cri.go:89] found id: ""
	I1210 00:00:12.413143   83859 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 00:00:12.429819   83859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:00:12.439244   83859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:00:12.439269   83859 kubeadm.go:157] found existing configuration files:
	
	I1210 00:00:12.439336   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:00:12.448182   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:00:12.448257   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:00:12.457759   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:00:12.467372   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:00:12.467449   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:00:12.476877   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:00:12.485831   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:00:12.485898   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:00:12.495537   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:00:12.504333   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:00:12.504398   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:00:12.514572   83859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:00:12.524134   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:12.619683   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.361844   83859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.742129567s)
	I1210 00:00:14.361876   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.585241   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.653166   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.771204   83859 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:00:14.771306   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.272143   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.771676   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.789350   83859 api_server.go:72] duration metric: took 1.018150373s to wait for apiserver process to appear ...
	I1210 00:00:15.789378   83859 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:00:15.789396   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:15.789878   83859 api_server.go:269] stopped: https://192.168.61.182:8443/healthz: Get "https://192.168.61.182:8443/healthz": dial tcp 192.168.61.182:8443: connect: connection refused
	I1210 00:00:16.289518   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:13.757326   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:15.757843   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:15.491876   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:17.991160   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:18.572189   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:00:18.572233   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:00:18.572262   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:18.607691   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:00:18.607732   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:00:18.790007   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:18.794252   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:00:18.794281   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:00:19.289819   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:19.294079   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:00:19.294104   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:00:19.789647   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:19.794119   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 200:
	ok
	I1210 00:00:19.800447   83859 api_server.go:141] control plane version: v1.31.2
	I1210 00:00:19.800477   83859 api_server.go:131] duration metric: took 4.011091942s to wait for apiserver health ...
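Note: the healthz progression above is the normal restart sequence: connection refused while the apiserver static pod comes up, then 403 because the unauthenticated probe reaches /healthz as system:anonymous, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. The same endpoint can be probed manually; a hedged curl sketch using the IP and cert paths from this log (whether that client cert is RBAC-authorized on a given cluster is an assumption):

  # anonymous probe: expect 403 early in startup, as shown above
  curl -k https://192.168.61.182:8443/healthz
  # authenticated probe with a client cert/key pair from the node; ?verbose lists the per-check results
  curl -k --cert /var/lib/minikube/certs/apiserver-kubelet-client.crt \
       --key /var/lib/minikube/certs/apiserver-kubelet-client.key \
       "https://192.168.61.182:8443/healthz?verbose"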
	I1210 00:00:19.800488   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:00:19.800496   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:00:19.802341   83859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:00:15.645243   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.145027   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.645018   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.145001   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.644893   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.145189   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.645579   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.144946   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.645832   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:20.145634   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.803715   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:00:19.814324   83859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
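Note: the 496-byte file written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen above. The exact contents are not shown in the log; a typical bridge + host-local + portmap conflist of that shape looks roughly like this (illustrative only, not the byte-for-byte file):

  {
    "cniVersion": "1.0.0",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }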
	I1210 00:00:19.861326   83859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:00:19.873554   83859 system_pods.go:59] 8 kube-system pods found
	I1210 00:00:19.873588   83859 system_pods.go:61] "coredns-7c65d6cfc9-smnt7" [7d85cb49-3cbb-4133-acab-284ddf0b72f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:00:19.873597   83859 system_pods.go:61] "etcd-no-preload-048296" [3eeaede5-b759-49e5-8267-0aaf3c374178] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 00:00:19.873606   83859 system_pods.go:61] "kube-apiserver-no-preload-048296" [8e9d39ce-218a-4fe4-86ea-234d97a5e51d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 00:00:19.873615   83859 system_pods.go:61] "kube-controller-manager-no-preload-048296" [e28ccf7d-13e4-4b55-81d0-2dd348584bca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 00:00:19.873622   83859 system_pods.go:61] "kube-proxy-z479r" [eb6588a6-ac3c-4066-b69e-5613b03ade53] Running
	I1210 00:00:19.873634   83859 system_pods.go:61] "kube-scheduler-no-preload-048296" [0a184373-35d4-4ac6-92fb-65a4faf1c2e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 00:00:19.873646   83859 system_pods.go:61] "metrics-server-6867b74b74-sd58c" [19d64157-7b9b-4a39-8e0c-2c48729fbc93] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:00:19.873653   83859 system_pods.go:61] "storage-provisioner" [30b3ed5d-f843-4090-827f-e1ee6a7ea274] Running
	I1210 00:00:19.873661   83859 system_pods.go:74] duration metric: took 12.308897ms to wait for pod list to return data ...
	I1210 00:00:19.873668   83859 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:00:19.877459   83859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:00:19.877482   83859 node_conditions.go:123] node cpu capacity is 2
	I1210 00:00:19.877496   83859 node_conditions.go:105] duration metric: took 3.822698ms to run NodePressure ...
	I1210 00:00:19.877513   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:20.145838   83859 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 00:00:20.151038   83859 kubeadm.go:739] kubelet initialised
	I1210 00:00:20.151059   83859 kubeadm.go:740] duration metric: took 5.1997ms waiting for restarted kubelet to initialise ...
	I1210 00:00:20.151068   83859 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
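Note: the per-pod waits that follow are the programmatic equivalent of polling pod conditions with kubectl. The same readiness check can be reproduced against this profile; the context name is assumed to match the minikube profile name from the log, and the label selectors are taken from the list in the line above:

  # wait for CoreDNS and the apiserver pod in kube-system to report Ready
  kubectl --context no-preload-048296 -n kube-system wait pod \
    -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
  kubectl --context no-preload-048296 -n kube-system wait pod \
    -l component=kube-apiserver --for=condition=Ready --timeout=4m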
	I1210 00:00:20.157554   83859 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.162413   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.162437   83859 pod_ready.go:82] duration metric: took 4.858113ms for pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.162446   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.162452   83859 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.167285   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "etcd-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.167312   83859 pod_ready.go:82] duration metric: took 4.84903ms for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.167321   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "etcd-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.167328   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.172365   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "kube-apiserver-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.172386   83859 pod_ready.go:82] duration metric: took 5.052446ms for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.172395   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "kube-apiserver-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.172401   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:18.257283   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.756752   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.490469   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:22.990635   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.645094   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.145179   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.645829   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.145159   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.645390   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.645205   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.145625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.645299   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:25.144971   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.180104   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:24.680524   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:26.680818   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:22.757621   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:25.257680   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:24.991719   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:27.490099   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:25.644977   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.145967   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.144972   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.645030   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.145227   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.645104   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.144957   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.645516   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:30.145343   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.678941   83859 pod_ready.go:93] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:28.678972   83859 pod_ready.go:82] duration metric: took 8.506560777s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.678985   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z479r" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.684896   83859 pod_ready.go:93] pod "kube-proxy-z479r" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:28.684917   83859 pod_ready.go:82] duration metric: took 5.924915ms for pod "kube-proxy-z479r" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.684926   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:30.691918   83859 pod_ready.go:103] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:31.691333   83859 pod_ready.go:93] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:31.691362   83859 pod_ready.go:82] duration metric: took 3.006429416s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:31.691372   83859 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:27.756937   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:30.260931   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:29.990322   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:32.489835   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:30.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.145393   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.645952   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.145695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.644910   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.146012   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.645636   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.145664   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.645813   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:35.145365   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.697948   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.197360   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:32.756044   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:34.756585   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.756683   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:34.490676   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.491117   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:38.991231   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:35.645823   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.145410   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.144994   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.645701   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.144880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.645735   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.145806   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.645695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:40.145692   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.198422   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:40.697971   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:39.256015   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:41.256607   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:41.490276   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:43.989182   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:40.644935   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.145320   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.645470   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.145714   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.644961   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.145687   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.644990   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:44.144958   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:44.645780   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:44.645870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:44.680200   84547 cri.go:89] found id: ""
	I1210 00:00:44.680321   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.680334   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:44.680343   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:44.680413   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:44.715278   84547 cri.go:89] found id: ""
	I1210 00:00:44.715305   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.715312   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:44.715318   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:44.715377   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:44.749908   84547 cri.go:89] found id: ""
	I1210 00:00:44.749933   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.749941   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:44.749946   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:44.750008   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:44.786851   84547 cri.go:89] found id: ""
	I1210 00:00:44.786882   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.786893   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:44.786901   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:44.786966   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:44.830060   84547 cri.go:89] found id: ""
	I1210 00:00:44.830106   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.830117   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:44.830125   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:44.830191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:44.865525   84547 cri.go:89] found id: ""
	I1210 00:00:44.865560   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.865571   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:44.865579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:44.865643   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:44.902529   84547 cri.go:89] found id: ""
	I1210 00:00:44.902565   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.902575   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:44.902584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:44.902647   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:44.938876   84547 cri.go:89] found id: ""
	I1210 00:00:44.938904   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.938914   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:44.938925   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:44.938939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:44.992533   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:44.992570   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:45.005990   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:45.006020   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:45.116774   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:45.116797   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:45.116810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:45.187376   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:45.187411   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:42.698555   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.198082   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:43.756559   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.756755   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.990399   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.991485   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.728964   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:47.742500   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:47.742560   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:47.775751   84547 cri.go:89] found id: ""
	I1210 00:00:47.775783   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.775793   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:47.775799   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:47.775848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:47.810202   84547 cri.go:89] found id: ""
	I1210 00:00:47.810228   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.810235   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:47.810241   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:47.810302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:47.844686   84547 cri.go:89] found id: ""
	I1210 00:00:47.844730   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.844739   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:47.844745   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:47.844802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:47.884076   84547 cri.go:89] found id: ""
	I1210 00:00:47.884108   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.884119   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:47.884127   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:47.884188   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:47.923697   84547 cri.go:89] found id: ""
	I1210 00:00:47.923722   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.923729   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:47.923734   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:47.923791   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:47.955790   84547 cri.go:89] found id: ""
	I1210 00:00:47.955816   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.955824   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:47.955829   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:47.955890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:48.021501   84547 cri.go:89] found id: ""
	I1210 00:00:48.021529   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.021537   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:48.021543   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:48.021592   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:48.054645   84547 cri.go:89] found id: ""
	I1210 00:00:48.054675   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.054688   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:48.054699   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:48.054714   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:48.135706   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:48.135729   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:48.135746   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:48.212397   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:48.212438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:48.254002   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:48.254033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:48.305858   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:48.305891   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:47.697503   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:49.698076   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.758210   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.257400   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.490507   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:52.989527   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.819644   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:50.834153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:50.834233   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:50.872357   84547 cri.go:89] found id: ""
	I1210 00:00:50.872391   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.872402   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:50.872409   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:50.872471   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:50.909793   84547 cri.go:89] found id: ""
	I1210 00:00:50.909826   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.909839   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:50.909848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:50.909915   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:50.947167   84547 cri.go:89] found id: ""
	I1210 00:00:50.947206   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.947219   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:50.947228   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:50.947304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:50.987194   84547 cri.go:89] found id: ""
	I1210 00:00:50.987219   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.987228   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:50.987234   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:50.987287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:51.027433   84547 cri.go:89] found id: ""
	I1210 00:00:51.027464   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.027476   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:51.027483   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:51.027586   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:51.062156   84547 cri.go:89] found id: ""
	I1210 00:00:51.062186   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.062200   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:51.062208   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:51.062267   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:51.099161   84547 cri.go:89] found id: ""
	I1210 00:00:51.099187   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.099195   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:51.099202   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:51.099249   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:51.134400   84547 cri.go:89] found id: ""
	I1210 00:00:51.134432   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.134445   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:51.134459   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:51.134474   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:51.171842   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:51.171869   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:51.222815   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:51.222854   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:51.236499   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:51.236537   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:51.307835   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:51.307856   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:51.307871   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:53.888771   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:53.902167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:53.902234   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:53.940724   84547 cri.go:89] found id: ""
	I1210 00:00:53.940748   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.940756   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:53.940767   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:53.940823   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:53.975076   84547 cri.go:89] found id: ""
	I1210 00:00:53.975111   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.975122   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:53.975128   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:53.975191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:54.011123   84547 cri.go:89] found id: ""
	I1210 00:00:54.011149   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.011157   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:54.011162   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:54.011207   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:54.043594   84547 cri.go:89] found id: ""
	I1210 00:00:54.043620   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.043628   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:54.043633   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:54.043679   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:54.076172   84547 cri.go:89] found id: ""
	I1210 00:00:54.076208   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.076219   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:54.076227   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:54.076292   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:54.110646   84547 cri.go:89] found id: ""
	I1210 00:00:54.110679   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.110691   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:54.110711   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:54.110784   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:54.145901   84547 cri.go:89] found id: ""
	I1210 00:00:54.145929   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.145940   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:54.145947   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:54.146007   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:54.178589   84547 cri.go:89] found id: ""
	I1210 00:00:54.178618   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.178629   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:54.178639   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:54.178652   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:54.231005   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:54.231040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:54.244583   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:54.244608   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:54.318759   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:54.318787   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:54.318800   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:54.395975   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:54.396012   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:52.198012   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:54.199221   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:56.697929   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:52.756419   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:54.757807   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:57.257555   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:55.490621   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:57.491632   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:56.969699   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:56.985159   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:56.985232   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:57.020768   84547 cri.go:89] found id: ""
	I1210 00:00:57.020798   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.020807   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:57.020812   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:57.020861   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:57.055796   84547 cri.go:89] found id: ""
	I1210 00:00:57.055826   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.055834   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:57.055839   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:57.055897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:57.089345   84547 cri.go:89] found id: ""
	I1210 00:00:57.089375   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.089385   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:57.089392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:57.089460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:57.123969   84547 cri.go:89] found id: ""
	I1210 00:00:57.123993   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.124004   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:57.124012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:57.124075   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:57.158744   84547 cri.go:89] found id: ""
	I1210 00:00:57.158769   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.158777   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:57.158783   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:57.158842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:57.203933   84547 cri.go:89] found id: ""
	I1210 00:00:57.203954   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.203962   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:57.203968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:57.204025   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:57.238180   84547 cri.go:89] found id: ""
	I1210 00:00:57.238213   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.238224   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:57.238231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:57.238287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:57.273583   84547 cri.go:89] found id: ""
	I1210 00:00:57.273612   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.273623   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:57.273633   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:57.273648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:57.345992   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:57.346019   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:57.346035   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:57.428335   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:57.428369   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:57.466437   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:57.466472   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:57.517064   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:57.517099   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.030599   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:00.045151   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:00.045229   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:00.083050   84547 cri.go:89] found id: ""
	I1210 00:01:00.083074   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.083085   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:00.083093   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:00.083152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:00.119082   84547 cri.go:89] found id: ""
	I1210 00:01:00.119107   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.119120   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:00.119126   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:00.119185   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:00.166888   84547 cri.go:89] found id: ""
	I1210 00:01:00.166921   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.166931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:00.166939   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:00.166998   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:00.209504   84547 cri.go:89] found id: ""
	I1210 00:01:00.209526   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.209533   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:00.209539   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:00.209595   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:00.247649   84547 cri.go:89] found id: ""
	I1210 00:01:00.247672   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.247680   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:00.247686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:00.247736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:00.294419   84547 cri.go:89] found id: ""
	I1210 00:01:00.294445   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.294455   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:00.294463   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:00.294526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:00.328634   84547 cri.go:89] found id: ""
	I1210 00:01:00.328667   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.328677   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:00.328684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:00.328751   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:00.364667   84547 cri.go:89] found id: ""
	I1210 00:01:00.364696   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.364724   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:00.364733   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:00.364745   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.377518   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:00.377552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:00.449145   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:00.449166   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:00.449178   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:58.698000   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.196968   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:59.756367   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.756407   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:59.989896   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.991121   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:00.529462   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:00.529499   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:00.570471   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:00.570503   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.122835   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:03.136329   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:03.136405   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:03.170352   84547 cri.go:89] found id: ""
	I1210 00:01:03.170380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.170388   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:03.170393   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:03.170454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:03.204339   84547 cri.go:89] found id: ""
	I1210 00:01:03.204368   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.204379   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:03.204386   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:03.204448   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:03.237516   84547 cri.go:89] found id: ""
	I1210 00:01:03.237559   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.237572   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:03.237579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:03.237641   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:03.273379   84547 cri.go:89] found id: ""
	I1210 00:01:03.273407   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.273416   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:03.273421   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:03.274017   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:03.308987   84547 cri.go:89] found id: ""
	I1210 00:01:03.309026   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.309038   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:03.309046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:03.309102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:03.341913   84547 cri.go:89] found id: ""
	I1210 00:01:03.341943   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.341954   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:03.341961   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:03.342009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:03.375403   84547 cri.go:89] found id: ""
	I1210 00:01:03.375429   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.375437   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:03.375442   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:03.375494   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:03.408352   84547 cri.go:89] found id: ""
	I1210 00:01:03.408380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.408387   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:03.408397   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:03.408409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:03.483789   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:03.483831   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:03.532021   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:03.532050   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.585095   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:03.585129   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:03.600062   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:03.600089   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:03.671633   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:03.197410   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:05.698110   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:03.757014   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.257259   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:04.490159   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.490493   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:08.991447   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.172345   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:06.199809   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:06.199897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:06.240690   84547 cri.go:89] found id: ""
	I1210 00:01:06.240749   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.240760   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:06.240769   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:06.240822   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:06.275743   84547 cri.go:89] found id: ""
	I1210 00:01:06.275770   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.275779   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:06.275786   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:06.275848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:06.310399   84547 cri.go:89] found id: ""
	I1210 00:01:06.310427   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.310438   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:06.310445   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:06.310507   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:06.345278   84547 cri.go:89] found id: ""
	I1210 00:01:06.345308   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.345320   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:06.345328   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:06.345394   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:06.381926   84547 cri.go:89] found id: ""
	I1210 00:01:06.381961   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.381988   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:06.381994   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:06.382064   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:06.415341   84547 cri.go:89] found id: ""
	I1210 00:01:06.415367   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.415377   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:06.415385   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:06.415438   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:06.449701   84547 cri.go:89] found id: ""
	I1210 00:01:06.449733   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.449743   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:06.449750   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:06.449814   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:06.484271   84547 cri.go:89] found id: ""
	I1210 00:01:06.484298   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.484307   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:06.484317   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:06.484333   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:06.570279   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:06.570313   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:06.607812   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:06.607845   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:06.658206   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:06.658243   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:06.670949   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:06.670978   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:06.745785   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:09.246367   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:09.259390   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:09.259462   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:09.291811   84547 cri.go:89] found id: ""
	I1210 00:01:09.291841   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.291853   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:09.291865   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:09.291922   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:09.326996   84547 cri.go:89] found id: ""
	I1210 00:01:09.327027   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.327038   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:09.327045   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:09.327109   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:09.361026   84547 cri.go:89] found id: ""
	I1210 00:01:09.361062   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.361073   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:09.361081   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:09.361152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:09.397116   84547 cri.go:89] found id: ""
	I1210 00:01:09.397148   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.397159   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:09.397166   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:09.397225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:09.428989   84547 cri.go:89] found id: ""
	I1210 00:01:09.429025   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.429039   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:09.429046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:09.429111   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:09.462491   84547 cri.go:89] found id: ""
	I1210 00:01:09.462535   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.462547   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:09.462572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:09.462652   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:09.502705   84547 cri.go:89] found id: ""
	I1210 00:01:09.502728   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.502735   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:09.502740   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:09.502798   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:09.538721   84547 cri.go:89] found id: ""
	I1210 00:01:09.538744   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.538754   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:09.538766   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:09.538779   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:09.590732   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:09.590768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:09.604928   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:09.604960   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:09.681457   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:09.681484   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:09.681498   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:09.758116   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:09.758149   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:07.698904   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.198215   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:08.755738   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.756172   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.992252   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:13.491283   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:12.307016   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:12.320284   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:12.320371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:12.351984   84547 cri.go:89] found id: ""
	I1210 00:01:12.352007   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.352016   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:12.352024   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:12.352086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:12.387797   84547 cri.go:89] found id: ""
	I1210 00:01:12.387829   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.387840   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:12.387848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:12.387924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:12.417934   84547 cri.go:89] found id: ""
	I1210 00:01:12.417968   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.417979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:12.417987   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:12.418048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:12.451771   84547 cri.go:89] found id: ""
	I1210 00:01:12.451802   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.451815   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:12.451822   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:12.451890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:12.485007   84547 cri.go:89] found id: ""
	I1210 00:01:12.485037   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.485048   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:12.485055   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:12.485117   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:12.527851   84547 cri.go:89] found id: ""
	I1210 00:01:12.527879   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.527895   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:12.527905   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:12.527967   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:12.560341   84547 cri.go:89] found id: ""
	I1210 00:01:12.560364   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.560372   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:12.560377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:12.560428   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:12.593118   84547 cri.go:89] found id: ""
	I1210 00:01:12.593143   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.593150   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:12.593158   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:12.593169   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:12.658694   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:12.658717   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:12.658749   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:12.739742   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:12.739776   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:12.780949   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:12.780977   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:12.829570   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:12.829607   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:15.344239   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:15.358424   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:15.358527   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:15.390712   84547 cri.go:89] found id: ""
	I1210 00:01:15.390744   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.390757   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:15.390764   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:15.390847   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:15.424198   84547 cri.go:89] found id: ""
	I1210 00:01:15.424226   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.424234   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:15.424239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:15.424306   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:15.458340   84547 cri.go:89] found id: ""
	I1210 00:01:15.458363   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.458370   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:15.458377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:15.458422   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:15.491468   84547 cri.go:89] found id: ""
	I1210 00:01:15.491497   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.491507   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:15.491520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:15.491597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:12.698224   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.197841   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:12.756618   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:14.759492   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:17.256075   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.491494   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:17.989410   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.528405   84547 cri.go:89] found id: ""
	I1210 00:01:15.528437   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.528448   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:15.528455   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:15.528517   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:15.562966   84547 cri.go:89] found id: ""
	I1210 00:01:15.562995   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.563005   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:15.563012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:15.563063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:15.595815   84547 cri.go:89] found id: ""
	I1210 00:01:15.595838   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.595845   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:15.595850   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:15.595907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:15.631296   84547 cri.go:89] found id: ""
	I1210 00:01:15.631322   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.631333   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:15.631347   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:15.631362   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:15.680177   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:15.680213   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:15.693685   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:15.693724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:15.760285   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:15.760312   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:15.760326   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:15.837814   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:15.837855   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:18.377112   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:18.390167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:18.390230   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:18.424095   84547 cri.go:89] found id: ""
	I1210 00:01:18.424127   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.424140   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:18.424150   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:18.424216   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:18.456840   84547 cri.go:89] found id: ""
	I1210 00:01:18.456868   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.456876   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:18.456882   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:18.456938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:18.492109   84547 cri.go:89] found id: ""
	I1210 00:01:18.492134   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.492145   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:18.492153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:18.492212   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:18.530445   84547 cri.go:89] found id: ""
	I1210 00:01:18.530472   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.530480   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:18.530486   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:18.530549   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:18.567036   84547 cri.go:89] found id: ""
	I1210 00:01:18.567060   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.567070   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:18.567077   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:18.567136   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:18.606829   84547 cri.go:89] found id: ""
	I1210 00:01:18.606853   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.606863   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:18.606870   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:18.606927   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:18.646032   84547 cri.go:89] found id: ""
	I1210 00:01:18.646061   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.646070   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:18.646075   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:18.646127   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:18.683969   84547 cri.go:89] found id: ""
	I1210 00:01:18.683997   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.684008   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:18.684019   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:18.684036   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:18.735760   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:18.735807   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:18.751689   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:18.751724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:18.823252   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:18.823272   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:18.823287   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:18.908071   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:18.908110   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:17.698577   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:20.197901   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:19.757398   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:22.255920   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:20.490512   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:22.990168   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:21.445415   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:21.458091   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:21.458152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:21.492615   84547 cri.go:89] found id: ""
	I1210 00:01:21.492646   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.492657   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:21.492664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:21.492718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:21.529564   84547 cri.go:89] found id: ""
	I1210 00:01:21.529586   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.529594   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:21.529599   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:21.529669   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:21.566409   84547 cri.go:89] found id: ""
	I1210 00:01:21.566441   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.566450   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:21.566456   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:21.566528   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:21.601783   84547 cri.go:89] found id: ""
	I1210 00:01:21.601815   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.601827   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:21.601835   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:21.601895   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:21.635740   84547 cri.go:89] found id: ""
	I1210 00:01:21.635762   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.635769   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:21.635775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:21.635831   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:21.667777   84547 cri.go:89] found id: ""
	I1210 00:01:21.667806   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.667826   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:21.667834   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:21.667894   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:21.704355   84547 cri.go:89] found id: ""
	I1210 00:01:21.704380   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.704388   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:21.704398   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:21.704457   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:21.737365   84547 cri.go:89] found id: ""
	I1210 00:01:21.737398   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.737410   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:21.737422   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:21.737438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:21.751394   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:21.751434   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:21.823217   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:21.823240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:21.823255   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:21.910097   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:21.910131   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:21.950087   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:21.950123   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.504626   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:24.517920   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:24.517997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:24.551766   84547 cri.go:89] found id: ""
	I1210 00:01:24.551806   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.551814   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:24.551821   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:24.551880   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:24.588210   84547 cri.go:89] found id: ""
	I1210 00:01:24.588246   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.588256   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:24.588263   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:24.588341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:24.620569   84547 cri.go:89] found id: ""
	I1210 00:01:24.620598   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.620607   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:24.620613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:24.620673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:24.657527   84547 cri.go:89] found id: ""
	I1210 00:01:24.657550   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.657558   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:24.657564   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:24.657636   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:24.689382   84547 cri.go:89] found id: ""
	I1210 00:01:24.689410   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.689418   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:24.689423   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:24.689475   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:24.727170   84547 cri.go:89] found id: ""
	I1210 00:01:24.727207   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.727224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:24.727230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:24.727280   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:24.760733   84547 cri.go:89] found id: ""
	I1210 00:01:24.760759   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.760769   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:24.760775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:24.760842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:24.794750   84547 cri.go:89] found id: ""
	I1210 00:01:24.794782   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.794791   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:24.794799   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:24.794810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.847403   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:24.847441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:24.862240   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:24.862273   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:24.936373   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:24.936396   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:24.936409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:25.011126   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:25.011160   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:22.198527   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.697981   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.257466   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:26.258086   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.990792   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:26.991843   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:27.551134   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:27.564526   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:27.564665   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:27.600652   84547 cri.go:89] found id: ""
	I1210 00:01:27.600677   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.600685   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:27.600690   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:27.600753   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:27.643593   84547 cri.go:89] found id: ""
	I1210 00:01:27.643625   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.643636   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:27.643649   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:27.643705   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:27.680633   84547 cri.go:89] found id: ""
	I1210 00:01:27.680664   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.680676   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:27.680684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:27.680748   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:27.721790   84547 cri.go:89] found id: ""
	I1210 00:01:27.721824   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.721832   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:27.721837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:27.721901   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:27.756462   84547 cri.go:89] found id: ""
	I1210 00:01:27.756492   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.756504   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:27.756512   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:27.756574   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:27.792692   84547 cri.go:89] found id: ""
	I1210 00:01:27.792728   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.792838   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:27.792851   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:27.792912   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:27.830065   84547 cri.go:89] found id: ""
	I1210 00:01:27.830098   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.830107   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:27.830116   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:27.830182   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:27.867516   84547 cri.go:89] found id: ""
	I1210 00:01:27.867557   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.867580   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:27.867595   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:27.867611   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:27.917622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:27.917660   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:27.931687   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:27.931717   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:28.003827   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:28.003860   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:28.003876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:28.081375   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:28.081410   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:27.197999   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:29.198364   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:31.698061   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:28.755855   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:30.756687   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:29.489992   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:31.490850   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:33.990464   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:30.622885   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:30.635990   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:30.636065   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:30.671418   84547 cri.go:89] found id: ""
	I1210 00:01:30.671453   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.671465   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:30.671475   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:30.671548   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:30.707168   84547 cri.go:89] found id: ""
	I1210 00:01:30.707203   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.707214   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:30.707222   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:30.707275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:30.741349   84547 cri.go:89] found id: ""
	I1210 00:01:30.741377   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.741386   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:30.741395   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:30.741449   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:30.774952   84547 cri.go:89] found id: ""
	I1210 00:01:30.774985   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.774997   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:30.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:30.775062   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:30.809596   84547 cri.go:89] found id: ""
	I1210 00:01:30.809623   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.809631   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:30.809636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:30.809699   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:30.844256   84547 cri.go:89] found id: ""
	I1210 00:01:30.844288   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.844300   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:30.844308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:30.844371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:30.876086   84547 cri.go:89] found id: ""
	I1210 00:01:30.876113   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.876124   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:30.876131   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:30.876195   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:30.910858   84547 cri.go:89] found id: ""
	I1210 00:01:30.910884   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.910895   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:30.910905   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:30.910920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:30.990855   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:30.990876   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:30.990888   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:31.069186   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:31.069221   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:31.109653   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:31.109689   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:31.164962   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:31.165001   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:33.679784   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:33.692222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:33.692310   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:33.727937   84547 cri.go:89] found id: ""
	I1210 00:01:33.727960   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.727967   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:33.727973   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:33.728022   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:33.762122   84547 cri.go:89] found id: ""
	I1210 00:01:33.762151   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.762162   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:33.762171   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:33.762237   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:33.793855   84547 cri.go:89] found id: ""
	I1210 00:01:33.793883   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.793894   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:33.793903   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:33.793961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:33.827240   84547 cri.go:89] found id: ""
	I1210 00:01:33.827280   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.827292   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:33.827300   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:33.827366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:33.863705   84547 cri.go:89] found id: ""
	I1210 00:01:33.863728   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.863738   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:33.863743   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:33.863792   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:33.897189   84547 cri.go:89] found id: ""
	I1210 00:01:33.897213   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.897224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:33.897229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:33.897282   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:33.930001   84547 cri.go:89] found id: ""
	I1210 00:01:33.930034   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.930044   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:33.930052   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:33.930113   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:33.967330   84547 cri.go:89] found id: ""
	I1210 00:01:33.967359   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.967367   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:33.967378   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:33.967390   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:34.051952   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:34.051996   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:34.087729   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:34.087762   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:34.137879   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:34.137915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:34.151989   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:34.152025   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:34.225587   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:33.698633   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.197023   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:33.257021   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:35.258050   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.489900   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:38.490643   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.726065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:36.740513   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:36.740594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:36.774959   84547 cri.go:89] found id: ""
	I1210 00:01:36.774991   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.775000   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:36.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:36.775070   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:36.808888   84547 cri.go:89] found id: ""
	I1210 00:01:36.808921   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.808934   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:36.808941   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:36.809001   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:36.841695   84547 cri.go:89] found id: ""
	I1210 00:01:36.841727   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.841738   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:36.841748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:36.841840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:36.877479   84547 cri.go:89] found id: ""
	I1210 00:01:36.877509   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.877522   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:36.877531   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:36.877600   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:36.909232   84547 cri.go:89] found id: ""
	I1210 00:01:36.909257   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.909265   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:36.909271   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:36.909328   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:36.945858   84547 cri.go:89] found id: ""
	I1210 00:01:36.945892   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.945904   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:36.945912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:36.945981   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:36.978559   84547 cri.go:89] found id: ""
	I1210 00:01:36.978592   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.978604   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:36.978611   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:36.978674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:37.015523   84547 cri.go:89] found id: ""
	I1210 00:01:37.015555   84547 logs.go:282] 0 containers: []
	W1210 00:01:37.015587   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:37.015598   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:37.015613   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:37.094825   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:37.094876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:37.134442   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:37.134470   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:37.184691   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:37.184728   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:37.198801   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:37.198828   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:37.268415   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:39.768790   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:39.783106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:39.783184   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:39.816629   84547 cri.go:89] found id: ""
	I1210 00:01:39.816655   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.816665   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:39.816672   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:39.816749   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:39.851518   84547 cri.go:89] found id: ""
	I1210 00:01:39.851550   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.851581   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:39.851590   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:39.851648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:39.887790   84547 cri.go:89] found id: ""
	I1210 00:01:39.887827   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.887842   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:39.887852   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:39.887924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:39.922230   84547 cri.go:89] found id: ""
	I1210 00:01:39.922255   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.922262   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:39.922268   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:39.922332   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:39.957142   84547 cri.go:89] found id: ""
	I1210 00:01:39.957174   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.957184   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:39.957192   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:39.957254   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:40.004583   84547 cri.go:89] found id: ""
	I1210 00:01:40.004609   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.004618   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:40.004624   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:40.004675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:40.038278   84547 cri.go:89] found id: ""
	I1210 00:01:40.038300   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.038308   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:40.038313   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:40.038366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:40.071920   84547 cri.go:89] found id: ""
	I1210 00:01:40.071947   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.071954   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:40.071963   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:40.071973   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:40.142005   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:40.142033   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:40.142049   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:40.219413   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:40.219452   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:40.260786   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:40.260822   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:40.315943   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:40.315988   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:38.198247   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:40.198455   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:37.756243   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:39.756567   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:41.757107   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:40.990316   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:42.990948   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
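	In parallel, three other profiles (processes 83859, 84259, and 83900) keep polling their metrics-server pods every couple of seconds and never observe Ready. The shape of that loop is a simple timed poll; the sketch below is a stdlib-only illustration of the same pattern (the isReady check, interval, and timeout are placeholders, not minikube's pod_ready implementation):

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    // Placeholder readiness check; minikube's pod_ready.go queries the API server instead.
	    func isReady() bool { return false }

	    func main() {
	        deadline := time.Now().Add(30 * time.Second) // illustrative timeout
	        for time.Now().Before(deadline) {
	            if isReady() {
	                fmt.Println("pod is Ready")
	                return
	            }
	            fmt.Println(`pod has status "Ready":"False"`) // mirrors the log lines above
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for Ready")
	    }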
	I1210 00:01:42.832592   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:42.847532   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:42.847630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:42.879313   84547 cri.go:89] found id: ""
	I1210 00:01:42.879341   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.879348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:42.879354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:42.879403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:42.914839   84547 cri.go:89] found id: ""
	I1210 00:01:42.914866   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.914877   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:42.914884   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:42.914947   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:42.949514   84547 cri.go:89] found id: ""
	I1210 00:01:42.949553   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.949564   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:42.949572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:42.949632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:42.987974   84547 cri.go:89] found id: ""
	I1210 00:01:42.988004   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.988021   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:42.988029   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:42.988087   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:43.022835   84547 cri.go:89] found id: ""
	I1210 00:01:43.022860   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.022867   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:43.022873   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:43.022921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:43.055940   84547 cri.go:89] found id: ""
	I1210 00:01:43.055967   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.055975   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:43.055981   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:43.056030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:43.088728   84547 cri.go:89] found id: ""
	I1210 00:01:43.088754   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.088762   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:43.088769   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:43.088827   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:43.122809   84547 cri.go:89] found id: ""
	I1210 00:01:43.122842   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.122853   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:43.122865   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:43.122881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:43.172243   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:43.172277   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:43.186566   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:43.186596   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:43.264301   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:43.264327   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:43.264341   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:43.339804   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:43.339848   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:42.698066   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.197813   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:44.256069   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:46.256746   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.490253   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:47.989687   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.881356   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:45.894702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:45.894779   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:45.928927   84547 cri.go:89] found id: ""
	I1210 00:01:45.928951   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.928958   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:45.928964   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:45.929009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:45.965271   84547 cri.go:89] found id: ""
	I1210 00:01:45.965303   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.965315   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:45.965323   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:45.965392   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:45.999059   84547 cri.go:89] found id: ""
	I1210 00:01:45.999082   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.999090   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:45.999095   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:45.999140   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:46.034425   84547 cri.go:89] found id: ""
	I1210 00:01:46.034456   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.034468   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:46.034476   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:46.034529   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:46.067946   84547 cri.go:89] found id: ""
	I1210 00:01:46.067970   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.067986   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:46.067993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:46.068056   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:46.101671   84547 cri.go:89] found id: ""
	I1210 00:01:46.101699   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.101710   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:46.101718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:46.101783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:46.135860   84547 cri.go:89] found id: ""
	I1210 00:01:46.135885   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.135893   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:46.135898   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:46.135948   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:46.177398   84547 cri.go:89] found id: ""
	I1210 00:01:46.177439   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.177450   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:46.177461   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:46.177476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:46.248134   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:46.248160   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:46.248175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:46.323652   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:46.323688   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:46.364017   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:46.364044   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:46.429480   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:46.429524   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:48.956518   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:48.970209   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:48.970294   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:49.008933   84547 cri.go:89] found id: ""
	I1210 00:01:49.008966   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.008977   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:49.008986   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:49.009050   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:49.044802   84547 cri.go:89] found id: ""
	I1210 00:01:49.044833   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.044843   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:49.044850   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:49.044921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:49.077484   84547 cri.go:89] found id: ""
	I1210 00:01:49.077517   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.077525   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:49.077530   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:49.077576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:49.110144   84547 cri.go:89] found id: ""
	I1210 00:01:49.110174   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.110186   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:49.110193   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:49.110255   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:49.142591   84547 cri.go:89] found id: ""
	I1210 00:01:49.142622   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.142633   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:49.142646   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:49.142709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:49.175456   84547 cri.go:89] found id: ""
	I1210 00:01:49.175486   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.175497   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:49.175505   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:49.175603   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:49.208341   84547 cri.go:89] found id: ""
	I1210 00:01:49.208368   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.208379   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:49.208387   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:49.208445   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:49.241486   84547 cri.go:89] found id: ""
	I1210 00:01:49.241509   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.241518   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:49.241615   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:49.241647   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:49.280023   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:49.280051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:49.328822   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:49.328858   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:49.343076   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:49.343104   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:49.418010   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:49.418037   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:49.418051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:47.198917   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:49.697840   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:48.757230   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:51.256927   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:49.990701   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:52.490649   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:52.004053   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:52.017350   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:52.017418   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:52.055617   84547 cri.go:89] found id: ""
	I1210 00:01:52.055641   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.055648   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:52.055654   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:52.055712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:52.088601   84547 cri.go:89] found id: ""
	I1210 00:01:52.088629   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.088637   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:52.088642   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:52.088694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:52.121880   84547 cri.go:89] found id: ""
	I1210 00:01:52.121912   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.121922   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:52.121928   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:52.121986   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:52.161281   84547 cri.go:89] found id: ""
	I1210 00:01:52.161321   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.161334   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:52.161341   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:52.161406   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:52.222768   84547 cri.go:89] found id: ""
	I1210 00:01:52.222793   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.222800   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:52.222806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:52.222862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:52.271441   84547 cri.go:89] found id: ""
	I1210 00:01:52.271465   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.271473   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:52.271479   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:52.271526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:52.311128   84547 cri.go:89] found id: ""
	I1210 00:01:52.311152   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.311160   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:52.311165   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:52.311211   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:52.348862   84547 cri.go:89] found id: ""
	I1210 00:01:52.348885   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.348892   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:52.348900   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:52.348913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:52.401280   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:52.401324   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:52.415532   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:52.415580   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:52.484956   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:52.484979   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:52.484994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:52.565102   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:52.565137   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:55.106446   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:55.121756   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:55.121846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:55.158601   84547 cri.go:89] found id: ""
	I1210 00:01:55.158632   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.158643   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:55.158650   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:55.158712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:55.192424   84547 cri.go:89] found id: ""
	I1210 00:01:55.192454   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.192464   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:55.192471   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:55.192530   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:55.226178   84547 cri.go:89] found id: ""
	I1210 00:01:55.226204   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.226213   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:55.226222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:55.226285   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:55.264123   84547 cri.go:89] found id: ""
	I1210 00:01:55.264148   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.264161   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:55.264169   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:55.264226   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:55.302476   84547 cri.go:89] found id: ""
	I1210 00:01:55.302503   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.302512   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:55.302520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:55.302597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:55.342308   84547 cri.go:89] found id: ""
	I1210 00:01:55.342341   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.342352   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:55.342360   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:55.342419   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:55.382203   84547 cri.go:89] found id: ""
	I1210 00:01:55.382226   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.382232   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:55.382238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:55.382286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:55.421381   84547 cri.go:89] found id: ""
	I1210 00:01:55.421409   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.421421   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:55.421432   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:55.421449   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:55.473758   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:55.473793   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:55.488138   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:55.488166   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:01:52.198005   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:54.697589   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:56.697748   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:53.756310   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:55.756964   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:54.990271   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:57.490303   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	W1210 00:01:55.567216   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:55.567240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:55.567251   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:55.648276   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:55.648319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:58.185245   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:58.199015   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:58.199091   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:58.232327   84547 cri.go:89] found id: ""
	I1210 00:01:58.232352   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.232360   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:58.232368   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:58.232436   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:58.270312   84547 cri.go:89] found id: ""
	I1210 00:01:58.270342   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.270353   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:58.270360   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:58.270420   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:58.303389   84547 cri.go:89] found id: ""
	I1210 00:01:58.303415   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.303422   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:58.303427   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:58.303486   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:58.338703   84547 cri.go:89] found id: ""
	I1210 00:01:58.338735   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.338747   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:58.338755   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:58.338817   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:58.376727   84547 cri.go:89] found id: ""
	I1210 00:01:58.376759   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.376770   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:58.376779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:58.376841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:58.410192   84547 cri.go:89] found id: ""
	I1210 00:01:58.410217   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.410224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:58.410230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:58.410286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:58.449772   84547 cri.go:89] found id: ""
	I1210 00:01:58.449794   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.449802   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:58.449807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:58.449859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:58.484285   84547 cri.go:89] found id: ""
	I1210 00:01:58.484316   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.484328   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:58.484339   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:58.484356   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:58.538402   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:58.538438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:58.551361   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:58.551391   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:58.613809   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:58.613836   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:58.613850   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:58.689606   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:58.689640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:58.698283   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.197599   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:57.758229   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:00.256339   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:59.490413   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.491035   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:03.990725   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.230924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:01.244878   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:01.244990   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:01.280475   84547 cri.go:89] found id: ""
	I1210 00:02:01.280501   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.280509   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:01.280514   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:01.280561   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:01.323042   84547 cri.go:89] found id: ""
	I1210 00:02:01.323065   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.323071   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:01.323077   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:01.323124   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:01.356146   84547 cri.go:89] found id: ""
	I1210 00:02:01.356171   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.356181   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:01.356190   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:01.356247   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:01.392542   84547 cri.go:89] found id: ""
	I1210 00:02:01.392567   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.392577   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:01.392584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:01.392721   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:01.426232   84547 cri.go:89] found id: ""
	I1210 00:02:01.426257   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.426268   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:01.426275   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:01.426341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:01.459754   84547 cri.go:89] found id: ""
	I1210 00:02:01.459786   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.459798   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:01.459806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:01.459865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:01.492406   84547 cri.go:89] found id: ""
	I1210 00:02:01.492435   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.492445   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:01.492450   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:01.492499   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:01.532012   84547 cri.go:89] found id: ""
	I1210 00:02:01.532034   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.532042   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:01.532049   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:01.532060   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:01.583145   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:01.583181   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:01.596910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:01.596939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:01.670480   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:01.670506   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:01.670534   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:01.748001   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:01.748041   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:04.291065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:04.304507   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:04.304587   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:04.341673   84547 cri.go:89] found id: ""
	I1210 00:02:04.341700   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.341713   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:04.341720   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:04.341772   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:04.379743   84547 cri.go:89] found id: ""
	I1210 00:02:04.379776   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.379787   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:04.379795   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:04.379856   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:04.413532   84547 cri.go:89] found id: ""
	I1210 00:02:04.413562   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.413573   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:04.413588   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:04.413648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:04.449192   84547 cri.go:89] found id: ""
	I1210 00:02:04.449221   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.449231   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:04.449238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:04.449324   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:04.484638   84547 cri.go:89] found id: ""
	I1210 00:02:04.484666   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.484677   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:04.484686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:04.484745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:04.524854   84547 cri.go:89] found id: ""
	I1210 00:02:04.524889   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.524903   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:04.524912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:04.524976   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:04.564697   84547 cri.go:89] found id: ""
	I1210 00:02:04.564726   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.564737   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:04.564748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:04.564797   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:04.603506   84547 cri.go:89] found id: ""
	I1210 00:02:04.603534   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.603544   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:04.603554   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:04.603583   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:04.653025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:04.653062   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:04.666833   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:04.666878   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:04.745491   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:04.745513   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:04.745526   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:04.825267   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:04.825304   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:03.702878   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:06.197834   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:02.755541   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:04.757334   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:07.256160   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:06.490491   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:08.491144   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:07.365419   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:07.378968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:07.379030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:07.412830   84547 cri.go:89] found id: ""
	I1210 00:02:07.412859   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.412868   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:07.412873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:07.412938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:07.445622   84547 cri.go:89] found id: ""
	I1210 00:02:07.445661   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.445674   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:07.445682   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:07.445742   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:07.480431   84547 cri.go:89] found id: ""
	I1210 00:02:07.480466   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.480474   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:07.480480   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:07.480533   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:07.518741   84547 cri.go:89] found id: ""
	I1210 00:02:07.518776   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.518790   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:07.518797   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:07.518860   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:07.552193   84547 cri.go:89] found id: ""
	I1210 00:02:07.552216   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.552223   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:07.552229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:07.552275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:07.585762   84547 cri.go:89] found id: ""
	I1210 00:02:07.585784   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.585792   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:07.585798   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:07.585843   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:07.618619   84547 cri.go:89] found id: ""
	I1210 00:02:07.618645   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.618653   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:07.618659   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:07.618709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:07.653377   84547 cri.go:89] found id: ""
	I1210 00:02:07.653418   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.653428   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:07.653440   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:07.653456   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:07.709366   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:07.709401   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:07.723762   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:07.723792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:07.804849   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:07.804869   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:07.804886   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:07.887117   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:07.887154   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
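	Each cycle above issues the same container query per control-plane component; when the runtime has no matching container, crictl returns an empty list, which the collector records as found id: "" and "0 containers". A minimal sketch of that query (assuming crictl and sudo are available on the node, as the log implies; this is not the harness code itself):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // Ask the container runtime (via crictl) for any kube-apiserver container,
	    // mirroring the "sudo crictl ps -a --quiet --name=kube-apiserver" calls above.
	    func main() {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	        if err != nil {
	            fmt.Println("crictl failed:", err)
	            return
	        }
	        ids := strings.Fields(string(out))
	        fmt.Printf("%d containers: %v\n", len(ids), ids) // empty output -> "0 containers: []"
	    }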
	I1210 00:02:10.423120   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:10.436563   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:10.436628   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:10.470622   84547 cri.go:89] found id: ""
	I1210 00:02:10.470650   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.470658   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:10.470664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:10.470735   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:10.506211   84547 cri.go:89] found id: ""
	I1210 00:02:10.506238   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.506250   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:10.506257   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:10.506368   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:08.198217   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.699364   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:09.256492   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:11.256897   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.990662   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:13.491635   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.541846   84547 cri.go:89] found id: ""
	I1210 00:02:10.541871   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.541879   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:10.541885   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:10.541952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:10.581391   84547 cri.go:89] found id: ""
	I1210 00:02:10.581416   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.581427   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:10.581435   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:10.581503   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:10.615172   84547 cri.go:89] found id: ""
	I1210 00:02:10.615206   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.615216   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:10.615223   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:10.615289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:10.650791   84547 cri.go:89] found id: ""
	I1210 00:02:10.650813   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.650821   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:10.650826   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:10.650876   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:10.685428   84547 cri.go:89] found id: ""
	I1210 00:02:10.685452   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.685460   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:10.685466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:10.685524   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:10.719139   84547 cri.go:89] found id: ""
	I1210 00:02:10.719174   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.719186   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:10.719196   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:10.719211   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:10.732045   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:10.732073   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:10.805084   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
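# The "connection refused" on localhost:8443 above is consistent with the crictl checks earlier
# in this cycle: no kube-apiserver container exists, so nothing answers on the apiserver port.
# A hand-run equivalent of the failing call (same binary and kubeconfig as in the log):
sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
# To confirm nothing is listening on 8443 (ss is assumed to be present on the node; it does not
# appear in this log):
sudo ss -ltn 'sport = :8443'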
	I1210 00:02:10.805111   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:10.805127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:10.888301   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:10.888337   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:10.926005   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:10.926033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:13.479317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:13.494021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:13.494089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:13.527753   84547 cri.go:89] found id: ""
	I1210 00:02:13.527787   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.527799   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:13.527806   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:13.527862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:13.560574   84547 cri.go:89] found id: ""
	I1210 00:02:13.560607   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.560618   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:13.560625   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:13.560688   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:13.595507   84547 cri.go:89] found id: ""
	I1210 00:02:13.595551   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.595584   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:13.595592   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:13.595657   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:13.629848   84547 cri.go:89] found id: ""
	I1210 00:02:13.629873   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.629884   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:13.629892   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:13.629952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:13.662407   84547 cri.go:89] found id: ""
	I1210 00:02:13.662436   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.662447   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:13.662454   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:13.662509   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:13.694892   84547 cri.go:89] found id: ""
	I1210 00:02:13.694921   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.694940   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:13.694949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:13.695013   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:13.733248   84547 cri.go:89] found id: ""
	I1210 00:02:13.733327   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.733349   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:13.733358   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:13.733426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:13.771853   84547 cri.go:89] found id: ""
	I1210 00:02:13.771884   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.771894   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:13.771906   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:13.771920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:13.846886   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:13.846913   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:13.846928   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:13.929722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:13.929758   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:13.968401   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:13.968427   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:14.019770   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:14.019811   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:13.197726   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.198437   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:13.257271   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.755750   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.990729   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:18.490851   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
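# The interleaved pod_ready lines come from the parallel test processes (PIDs 83859, 84259,
# 83900), each polling the same condition: its metrics-server pod never reports Ready. A
# hand-run equivalent (pod name taken from the log; <profile> is a placeholder for the matching
# kubectl context, which is not shown in this excerpt):
kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-sd58c \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'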
	I1210 00:02:16.532794   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:16.547084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:16.547172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:16.584046   84547 cri.go:89] found id: ""
	I1210 00:02:16.584073   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.584084   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:16.584091   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:16.584150   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:16.615981   84547 cri.go:89] found id: ""
	I1210 00:02:16.616012   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.616023   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:16.616030   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:16.616094   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:16.647939   84547 cri.go:89] found id: ""
	I1210 00:02:16.647967   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.647979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:16.647986   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:16.648048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:16.682585   84547 cri.go:89] found id: ""
	I1210 00:02:16.682620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.682632   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:16.682640   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:16.682695   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:16.718593   84547 cri.go:89] found id: ""
	I1210 00:02:16.718620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.718628   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:16.718634   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:16.718687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:16.752513   84547 cri.go:89] found id: ""
	I1210 00:02:16.752536   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.752543   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:16.752549   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:16.752598   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:16.787679   84547 cri.go:89] found id: ""
	I1210 00:02:16.787702   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.787710   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:16.787715   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:16.787777   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:16.821275   84547 cri.go:89] found id: ""
	I1210 00:02:16.821297   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.821305   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:16.821312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:16.821322   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:16.872500   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:16.872533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:16.885185   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:16.885212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:16.962658   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:16.962679   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:16.962694   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:17.039689   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:17.039726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:19.578060   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:19.590601   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:19.590675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:19.622145   84547 cri.go:89] found id: ""
	I1210 00:02:19.622170   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.622179   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:19.622184   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:19.622231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:19.653204   84547 cri.go:89] found id: ""
	I1210 00:02:19.653232   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.653243   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:19.653250   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:19.653317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:19.685104   84547 cri.go:89] found id: ""
	I1210 00:02:19.685137   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.685148   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:19.685156   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:19.685213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:19.719087   84547 cri.go:89] found id: ""
	I1210 00:02:19.719113   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.719121   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:19.719126   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:19.719176   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:19.753210   84547 cri.go:89] found id: ""
	I1210 00:02:19.753239   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.753250   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:19.753258   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:19.753317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:19.787608   84547 cri.go:89] found id: ""
	I1210 00:02:19.787635   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.787645   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:19.787653   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:19.787718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:19.823111   84547 cri.go:89] found id: ""
	I1210 00:02:19.823142   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.823154   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:19.823161   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:19.823221   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:19.858274   84547 cri.go:89] found id: ""
	I1210 00:02:19.858301   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.858312   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:19.858323   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:19.858344   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:19.905386   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:19.905420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:19.918995   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:19.919026   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:19.990676   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:19.990700   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:19.990716   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:20.064396   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:20.064435   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:17.698109   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:20.197963   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:17.756633   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:19.757487   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:22.257036   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:20.990682   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:23.490383   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:22.604477   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:22.617408   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:22.617487   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:22.650144   84547 cri.go:89] found id: ""
	I1210 00:02:22.650178   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.650189   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:22.650197   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:22.650293   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:22.684342   84547 cri.go:89] found id: ""
	I1210 00:02:22.684367   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.684375   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:22.684380   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:22.684429   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:22.718168   84547 cri.go:89] found id: ""
	I1210 00:02:22.718194   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.718204   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:22.718211   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:22.718271   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:22.755192   84547 cri.go:89] found id: ""
	I1210 00:02:22.755222   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.755232   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:22.755240   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:22.755297   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:22.793095   84547 cri.go:89] found id: ""
	I1210 00:02:22.793129   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.793141   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:22.793149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:22.793209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:22.831116   84547 cri.go:89] found id: ""
	I1210 00:02:22.831146   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.831157   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:22.831164   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:22.831235   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:22.869652   84547 cri.go:89] found id: ""
	I1210 00:02:22.869686   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.869704   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:22.869709   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:22.869756   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:22.907480   84547 cri.go:89] found id: ""
	I1210 00:02:22.907504   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.907513   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:22.907520   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:22.907533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:22.983880   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:22.983902   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:22.983915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:23.062840   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:23.062880   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:23.101427   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:23.101476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:23.153861   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:23.153893   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:22.198638   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:24.698708   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:24.757526   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:27.256296   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:25.491835   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:27.990755   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:25.666908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:25.680047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:25.680125   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:25.715821   84547 cri.go:89] found id: ""
	I1210 00:02:25.715853   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.715865   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:25.715873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:25.715931   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:25.748191   84547 cri.go:89] found id: ""
	I1210 00:02:25.748222   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.748232   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:25.748239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:25.748295   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:25.786474   84547 cri.go:89] found id: ""
	I1210 00:02:25.786498   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.786505   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:25.786511   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:25.786569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:25.819282   84547 cri.go:89] found id: ""
	I1210 00:02:25.819319   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.819330   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:25.819337   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:25.819400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:25.858064   84547 cri.go:89] found id: ""
	I1210 00:02:25.858091   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.858100   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:25.858106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:25.858169   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:25.893335   84547 cri.go:89] found id: ""
	I1210 00:02:25.893362   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.893373   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:25.893380   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:25.893439   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:25.927225   84547 cri.go:89] found id: ""
	I1210 00:02:25.927254   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.927265   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:25.927272   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:25.927341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:25.963678   84547 cri.go:89] found id: ""
	I1210 00:02:25.963715   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.963725   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:25.963738   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:25.963756   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:25.994462   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:25.994488   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:26.061394   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:26.061442   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:26.061458   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:26.135152   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:26.135187   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:26.171961   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:26.171994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:28.723326   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:28.735721   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:28.735787   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:28.769493   84547 cri.go:89] found id: ""
	I1210 00:02:28.769516   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.769525   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:28.769530   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:28.769582   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:28.805594   84547 cri.go:89] found id: ""
	I1210 00:02:28.805641   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.805652   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:28.805658   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:28.805704   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:28.838982   84547 cri.go:89] found id: ""
	I1210 00:02:28.839007   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.839015   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:28.839020   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:28.839072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:28.876607   84547 cri.go:89] found id: ""
	I1210 00:02:28.876627   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.876635   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:28.876641   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:28.876700   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:28.910352   84547 cri.go:89] found id: ""
	I1210 00:02:28.910379   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.910386   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:28.910392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:28.910464   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:28.943896   84547 cri.go:89] found id: ""
	I1210 00:02:28.943917   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.943924   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:28.943930   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:28.943978   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:28.976554   84547 cri.go:89] found id: ""
	I1210 00:02:28.976583   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.976590   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:28.976596   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:28.976644   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:29.011334   84547 cri.go:89] found id: ""
	I1210 00:02:29.011357   84547 logs.go:282] 0 containers: []
	W1210 00:02:29.011364   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:29.011372   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:29.011384   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:29.061418   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:29.061464   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:29.074887   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:29.074913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:29.147240   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:29.147261   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:29.147272   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:29.225058   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:29.225094   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:27.197730   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:29.197818   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.198788   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:29.256802   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.258194   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:30.490822   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:32.990271   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.763432   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:31.777257   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:31.777359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:31.812558   84547 cri.go:89] found id: ""
	I1210 00:02:31.812588   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.812598   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:31.812606   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:31.812668   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:31.847042   84547 cri.go:89] found id: ""
	I1210 00:02:31.847065   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.847082   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:31.847088   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:31.847135   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:31.881182   84547 cri.go:89] found id: ""
	I1210 00:02:31.881208   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.881216   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:31.881221   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:31.881272   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:31.917367   84547 cri.go:89] found id: ""
	I1210 00:02:31.917393   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.917401   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:31.917407   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:31.917454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:31.956836   84547 cri.go:89] found id: ""
	I1210 00:02:31.956868   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.956883   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:31.956893   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:31.956952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:31.993125   84547 cri.go:89] found id: ""
	I1210 00:02:31.993151   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.993160   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:31.993168   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:31.993225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:32.031648   84547 cri.go:89] found id: ""
	I1210 00:02:32.031679   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.031687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:32.031692   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:32.031746   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:32.065894   84547 cri.go:89] found id: ""
	I1210 00:02:32.065923   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.065930   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:32.065941   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:32.065957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:32.133473   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:32.133496   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:32.133508   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:32.213129   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:32.213161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:32.251424   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:32.251453   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:32.302284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:32.302323   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:34.815963   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:34.829460   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:34.829543   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:34.865811   84547 cri.go:89] found id: ""
	I1210 00:02:34.865837   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.865847   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:34.865854   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:34.865916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:34.899185   84547 cri.go:89] found id: ""
	I1210 00:02:34.899211   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.899220   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:34.899227   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:34.899289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:34.932472   84547 cri.go:89] found id: ""
	I1210 00:02:34.932500   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.932509   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:34.932517   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:34.932581   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:34.965817   84547 cri.go:89] found id: ""
	I1210 00:02:34.965846   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.965857   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:34.965866   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:34.965930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:35.000036   84547 cri.go:89] found id: ""
	I1210 00:02:35.000066   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.000077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:35.000084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:35.000139   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:35.033808   84547 cri.go:89] found id: ""
	I1210 00:02:35.033839   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.033850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:35.033857   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:35.033916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:35.066242   84547 cri.go:89] found id: ""
	I1210 00:02:35.066269   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.066278   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:35.066285   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:35.066349   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:35.100740   84547 cri.go:89] found id: ""
	I1210 00:02:35.100763   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.100771   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:35.100779   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:35.100792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:35.155483   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:35.155520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:35.168910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:35.168939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:35.243234   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:35.243252   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:35.243263   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:35.320622   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:35.320657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:33.698543   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:36.198250   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:33.756108   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:35.757682   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:34.990969   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:37.495166   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:37.855684   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:37.869056   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:37.869156   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:37.903720   84547 cri.go:89] found id: ""
	I1210 00:02:37.903748   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.903759   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:37.903766   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:37.903859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:37.937741   84547 cri.go:89] found id: ""
	I1210 00:02:37.937780   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.937791   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:37.937808   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:37.937869   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:37.975255   84547 cri.go:89] found id: ""
	I1210 00:02:37.975281   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.975292   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:37.975299   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:37.975359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:38.017348   84547 cri.go:89] found id: ""
	I1210 00:02:38.017381   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.017393   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:38.017400   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:38.017460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:38.051039   84547 cri.go:89] found id: ""
	I1210 00:02:38.051065   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.051073   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:38.051079   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:38.051129   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:38.085752   84547 cri.go:89] found id: ""
	I1210 00:02:38.085782   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.085791   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:38.085799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:38.085858   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:38.119975   84547 cri.go:89] found id: ""
	I1210 00:02:38.120004   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.120014   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:38.120021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:38.120086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:38.161467   84547 cri.go:89] found id: ""
	I1210 00:02:38.161499   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.161526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:38.161537   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:38.161551   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:38.222277   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:38.222314   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:38.239300   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:38.239332   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:38.308997   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:38.309016   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:38.309032   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:38.394064   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:38.394108   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:38.198484   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.697916   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:38.257723   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.756530   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:39.990445   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:41.993952   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.933406   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:40.945862   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:40.945937   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:40.979503   84547 cri.go:89] found id: ""
	I1210 00:02:40.979532   84547 logs.go:282] 0 containers: []
	W1210 00:02:40.979540   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:40.979545   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:40.979619   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:41.016758   84547 cri.go:89] found id: ""
	I1210 00:02:41.016792   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.016803   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:41.016811   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:41.016873   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:41.053562   84547 cri.go:89] found id: ""
	I1210 00:02:41.053593   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.053601   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:41.053607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:41.053667   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:41.086713   84547 cri.go:89] found id: ""
	I1210 00:02:41.086745   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.086757   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:41.086767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:41.086830   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:41.122901   84547 cri.go:89] found id: ""
	I1210 00:02:41.122935   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.122945   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:41.122952   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:41.123011   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:41.156306   84547 cri.go:89] found id: ""
	I1210 00:02:41.156337   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.156355   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:41.156362   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:41.156423   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:41.189840   84547 cri.go:89] found id: ""
	I1210 00:02:41.189871   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.189882   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:41.189890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:41.189946   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:41.223019   84547 cri.go:89] found id: ""
	I1210 00:02:41.223051   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.223061   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:41.223072   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:41.223088   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:41.275608   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:41.275640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:41.289181   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:41.289210   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:41.358375   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:41.358404   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:41.358420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:41.440214   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:41.440250   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:43.980600   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:43.993110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:43.993165   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:44.026688   84547 cri.go:89] found id: ""
	I1210 00:02:44.026721   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.026732   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:44.026741   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:44.026796   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:44.062914   84547 cri.go:89] found id: ""
	I1210 00:02:44.062936   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.062943   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:44.062948   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:44.062999   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:44.105974   84547 cri.go:89] found id: ""
	I1210 00:02:44.106001   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.106009   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:44.106014   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:44.106061   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:44.140239   84547 cri.go:89] found id: ""
	I1210 00:02:44.140265   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.140274   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:44.140280   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:44.140338   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:44.175754   84547 cri.go:89] found id: ""
	I1210 00:02:44.175785   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.175796   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:44.175803   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:44.175870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:44.211663   84547 cri.go:89] found id: ""
	I1210 00:02:44.211694   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.211705   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:44.211712   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:44.211776   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:44.244796   84547 cri.go:89] found id: ""
	I1210 00:02:44.244821   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.244831   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:44.244837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:44.244898   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:44.282491   84547 cri.go:89] found id: ""
	I1210 00:02:44.282515   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.282528   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:44.282549   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:44.282562   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:44.335284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:44.335328   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:44.349489   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:44.349530   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:44.418643   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:44.418668   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:44.418682   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:44.493901   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:44.493932   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:43.197494   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:45.198341   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:43.256947   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:45.257225   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:47.258073   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:44.491413   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:46.990521   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:48.991847   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:47.033132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:47.046322   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:47.046403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:47.078490   84547 cri.go:89] found id: ""
	I1210 00:02:47.078521   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.078533   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:47.078541   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:47.078602   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:47.111457   84547 cri.go:89] found id: ""
	I1210 00:02:47.111479   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.111487   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:47.111492   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:47.111538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:47.144643   84547 cri.go:89] found id: ""
	I1210 00:02:47.144678   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.144689   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:47.144696   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:47.144757   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:47.178106   84547 cri.go:89] found id: ""
	I1210 00:02:47.178131   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.178141   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:47.178148   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:47.178213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:47.215670   84547 cri.go:89] found id: ""
	I1210 00:02:47.215697   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.215712   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:47.215718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:47.215767   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:47.248916   84547 cri.go:89] found id: ""
	I1210 00:02:47.248941   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.248948   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:47.248953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:47.249002   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:47.287632   84547 cri.go:89] found id: ""
	I1210 00:02:47.287660   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.287671   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:47.287680   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:47.287745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:47.327064   84547 cri.go:89] found id: ""
	I1210 00:02:47.327094   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.327103   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:47.327112   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:47.327126   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:47.341132   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:47.341176   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:47.417100   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:47.417121   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:47.417134   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:47.502612   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:47.502648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:47.541312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:47.541339   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.095403   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:50.108145   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:50.108202   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:50.140424   84547 cri.go:89] found id: ""
	I1210 00:02:50.140451   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.140462   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:50.140472   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:50.140532   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:50.173836   84547 cri.go:89] found id: ""
	I1210 00:02:50.173859   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.173866   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:50.173872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:50.173928   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:50.213916   84547 cri.go:89] found id: ""
	I1210 00:02:50.213937   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.213944   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:50.213949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:50.213997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:50.246854   84547 cri.go:89] found id: ""
	I1210 00:02:50.246889   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.246899   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:50.246907   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:50.246956   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:50.281416   84547 cri.go:89] found id: ""
	I1210 00:02:50.281448   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.281456   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:50.281462   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:50.281511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:50.313263   84547 cri.go:89] found id: ""
	I1210 00:02:50.313296   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.313308   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:50.313318   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:50.313385   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:50.347421   84547 cri.go:89] found id: ""
	I1210 00:02:50.347453   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.347463   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:50.347470   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:50.347544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:50.383111   84547 cri.go:89] found id: ""
	I1210 00:02:50.383134   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.383142   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:50.383151   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:50.383162   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:50.421982   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:50.422013   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.475478   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:50.475520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:50.489202   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:50.489256   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:02:47.199043   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:49.698055   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.698813   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:49.756585   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.757407   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.489831   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:53.490572   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	W1210 00:02:50.559501   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:50.559539   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:50.559552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.136042   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:53.149149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:53.149227   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:53.189323   84547 cri.go:89] found id: ""
	I1210 00:02:53.189349   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.189357   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:53.189365   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:53.189425   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:53.225235   84547 cri.go:89] found id: ""
	I1210 00:02:53.225269   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.225281   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:53.225288   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:53.225347   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:53.263465   84547 cri.go:89] found id: ""
	I1210 00:02:53.263492   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.263502   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:53.263510   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:53.263597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:53.297544   84547 cri.go:89] found id: ""
	I1210 00:02:53.297571   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.297583   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:53.297591   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:53.297656   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:53.331731   84547 cri.go:89] found id: ""
	I1210 00:02:53.331755   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.331762   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:53.331767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:53.331815   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:53.367395   84547 cri.go:89] found id: ""
	I1210 00:02:53.367427   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.367440   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:53.367447   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:53.367511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:53.403297   84547 cri.go:89] found id: ""
	I1210 00:02:53.403324   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.403332   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:53.403338   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:53.403398   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:53.437126   84547 cri.go:89] found id: ""
	I1210 00:02:53.437150   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.437158   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:53.437166   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:53.437177   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:53.489875   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:53.489914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:53.503915   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:53.503940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:53.578086   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:53.578114   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:53.578127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.658463   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:53.658501   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:53.699332   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.197941   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:54.257053   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.258352   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:55.990548   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:58.489947   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.197093   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:56.211959   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:56.212020   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:56.250462   84547 cri.go:89] found id: ""
	I1210 00:02:56.250484   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.250493   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:56.250498   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:56.250552   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:56.286828   84547 cri.go:89] found id: ""
	I1210 00:02:56.286854   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.286865   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:56.286872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:56.286939   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:56.320750   84547 cri.go:89] found id: ""
	I1210 00:02:56.320779   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.320787   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:56.320793   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:56.320840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:56.355903   84547 cri.go:89] found id: ""
	I1210 00:02:56.355943   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.355954   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:56.355960   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:56.356026   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:56.395979   84547 cri.go:89] found id: ""
	I1210 00:02:56.396007   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.396018   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:56.396025   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:56.396081   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:56.431481   84547 cri.go:89] found id: ""
	I1210 00:02:56.431506   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.431514   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:56.431520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:56.431594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:56.470318   84547 cri.go:89] found id: ""
	I1210 00:02:56.470349   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.470356   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:56.470361   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:56.470410   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:56.506388   84547 cri.go:89] found id: ""
	I1210 00:02:56.506411   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.506418   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:56.506429   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:56.506441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:56.558438   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:56.558483   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:56.571981   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:56.572004   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:56.634665   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:56.634697   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:56.634713   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:56.715663   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:56.715697   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:59.255305   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:59.268846   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:59.268930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:59.302334   84547 cri.go:89] found id: ""
	I1210 00:02:59.302359   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.302366   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:59.302372   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:59.302426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:59.336380   84547 cri.go:89] found id: ""
	I1210 00:02:59.336409   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.336419   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:59.336425   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:59.336492   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:59.370179   84547 cri.go:89] found id: ""
	I1210 00:02:59.370201   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.370210   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:59.370214   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:59.370268   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:59.403199   84547 cri.go:89] found id: ""
	I1210 00:02:59.403222   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.403229   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:59.403236   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:59.403307   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:59.438650   84547 cri.go:89] found id: ""
	I1210 00:02:59.438673   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.438681   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:59.438686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:59.438736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:59.473166   84547 cri.go:89] found id: ""
	I1210 00:02:59.473191   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.473199   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:59.473205   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:59.473264   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:59.506857   84547 cri.go:89] found id: ""
	I1210 00:02:59.506879   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.506888   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:59.506902   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:59.506963   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:59.551488   84547 cri.go:89] found id: ""
	I1210 00:02:59.551515   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.551526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:59.551542   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:59.551557   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:59.605032   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:59.605069   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:59.619238   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:59.619271   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:59.690772   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:59.690798   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:59.690813   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:59.774424   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:59.774460   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:58.198128   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:00.698106   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:58.756596   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:01.257970   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:00.490268   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:02.990951   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:02.315240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:02.329636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:02.329728   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:02.368571   84547 cri.go:89] found id: ""
	I1210 00:03:02.368599   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.368609   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:02.368621   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:02.368687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:02.406107   84547 cri.go:89] found id: ""
	I1210 00:03:02.406136   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.406148   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:02.406155   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:02.406219   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:02.443050   84547 cri.go:89] found id: ""
	I1210 00:03:02.443077   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.443091   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:02.443098   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:02.443146   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:02.484425   84547 cri.go:89] found id: ""
	I1210 00:03:02.484451   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.484461   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:02.484469   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:02.484536   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:02.525624   84547 cri.go:89] found id: ""
	I1210 00:03:02.525647   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.525655   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:02.525661   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:02.525711   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:02.564808   84547 cri.go:89] found id: ""
	I1210 00:03:02.564839   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.564850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:02.564856   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:02.564907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:02.598322   84547 cri.go:89] found id: ""
	I1210 00:03:02.598346   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.598354   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:02.598359   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:02.598417   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:02.632256   84547 cri.go:89] found id: ""
	I1210 00:03:02.632310   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.632322   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:02.632334   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:02.632348   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:02.686025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:02.686063   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:02.701214   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:02.701237   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:02.773453   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:02.773477   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:02.773491   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:02.862017   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:02.862068   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:05.401014   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:05.415009   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:05.415072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:05.454916   84547 cri.go:89] found id: ""
	I1210 00:03:05.454945   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.454955   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:05.454962   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:05.455023   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:05.492955   84547 cri.go:89] found id: ""
	I1210 00:03:05.492981   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.492988   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:05.492995   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:05.493046   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:02.698652   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.196840   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:03.755858   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.757395   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.489875   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:07.990053   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.539646   84547 cri.go:89] found id: ""
	I1210 00:03:05.539670   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.539683   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:05.539690   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:05.539755   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:05.574534   84547 cri.go:89] found id: ""
	I1210 00:03:05.574559   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.574567   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:05.574572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:05.574632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:05.608141   84547 cri.go:89] found id: ""
	I1210 00:03:05.608166   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.608174   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:05.608180   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:05.608243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:05.643725   84547 cri.go:89] found id: ""
	I1210 00:03:05.643751   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.643759   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:05.643765   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:05.643812   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:05.677730   84547 cri.go:89] found id: ""
	I1210 00:03:05.677760   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.677772   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:05.677779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:05.677846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:05.712475   84547 cri.go:89] found id: ""
	I1210 00:03:05.712507   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.712519   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:05.712531   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:05.712548   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:05.764532   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:05.764569   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:05.777910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:05.777938   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:05.851070   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:05.851099   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:05.851114   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:05.929518   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:05.929554   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.465166   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:08.478666   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:08.478723   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:08.513763   84547 cri.go:89] found id: ""
	I1210 00:03:08.513795   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.513808   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:08.513816   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:08.513877   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:08.548272   84547 cri.go:89] found id: ""
	I1210 00:03:08.548299   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.548306   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:08.548313   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:08.548397   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:08.581773   84547 cri.go:89] found id: ""
	I1210 00:03:08.581801   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.581812   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:08.581820   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:08.581884   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:08.613625   84547 cri.go:89] found id: ""
	I1210 00:03:08.613655   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.613664   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:08.613669   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:08.613716   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:08.644618   84547 cri.go:89] found id: ""
	I1210 00:03:08.644644   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.644652   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:08.644658   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:08.644717   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:08.676840   84547 cri.go:89] found id: ""
	I1210 00:03:08.676867   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.676879   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:08.676887   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:08.676952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:08.709918   84547 cri.go:89] found id: ""
	I1210 00:03:08.709952   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.709961   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:08.709969   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:08.710029   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:08.740842   84547 cri.go:89] found id: ""
	I1210 00:03:08.740871   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.740882   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:08.740893   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:08.740907   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:08.808679   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:08.808705   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:08.808721   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:08.884692   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:08.884733   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.920922   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:08.920952   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:08.971404   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:08.971441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:07.197548   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:09.197749   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:11.198894   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:08.258477   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:10.755482   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:10.490495   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:12.990709   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:11.484905   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:11.498791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:11.498865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:11.535784   84547 cri.go:89] found id: ""
	I1210 00:03:11.535809   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.535820   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:11.535827   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:11.535890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:11.567981   84547 cri.go:89] found id: ""
	I1210 00:03:11.568006   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.568019   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:11.568026   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:11.568083   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:11.601188   84547 cri.go:89] found id: ""
	I1210 00:03:11.601213   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.601224   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:11.601231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:11.601289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:11.635759   84547 cri.go:89] found id: ""
	I1210 00:03:11.635787   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.635798   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:11.635807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:11.635867   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:11.675512   84547 cri.go:89] found id: ""
	I1210 00:03:11.675537   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.675547   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:11.675554   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:11.675638   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:11.709889   84547 cri.go:89] found id: ""
	I1210 00:03:11.709912   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.709922   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:11.709929   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:11.709987   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:11.744571   84547 cri.go:89] found id: ""
	I1210 00:03:11.744603   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.744610   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:11.744616   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:11.744677   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:11.779091   84547 cri.go:89] found id: ""
	I1210 00:03:11.779123   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.779136   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:11.779146   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:11.779156   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:11.830682   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:11.830726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:11.844352   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:11.844383   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:11.915025   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:11.915046   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:11.915058   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:11.998581   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:11.998620   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:14.536408   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:14.550250   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:14.550321   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:14.586005   84547 cri.go:89] found id: ""
	I1210 00:03:14.586037   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.586049   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:14.586057   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:14.586118   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:14.620124   84547 cri.go:89] found id: ""
	I1210 00:03:14.620159   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.620170   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:14.620178   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:14.620231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:14.655079   84547 cri.go:89] found id: ""
	I1210 00:03:14.655104   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.655112   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:14.655117   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:14.655178   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:14.688253   84547 cri.go:89] found id: ""
	I1210 00:03:14.688285   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.688298   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:14.688308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:14.688371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:14.726904   84547 cri.go:89] found id: ""
	I1210 00:03:14.726932   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.726940   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:14.726945   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:14.726994   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:14.764836   84547 cri.go:89] found id: ""
	I1210 00:03:14.764868   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.764881   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:14.764890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:14.764955   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:14.803557   84547 cri.go:89] found id: ""
	I1210 00:03:14.803605   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.803616   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:14.803621   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:14.803674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:14.841070   84547 cri.go:89] found id: ""
	I1210 00:03:14.841102   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.841122   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:14.841137   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:14.841161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:14.907607   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:14.907631   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:14.907644   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:14.985179   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:14.985209   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:15.022654   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:15.022687   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:15.075224   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:15.075260   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:13.697072   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:15.697892   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:12.757232   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:14.758529   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.256541   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:15.490871   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.990140   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.589836   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:17.604045   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:17.604102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:17.643211   84547 cri.go:89] found id: ""
	I1210 00:03:17.643241   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.643251   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:17.643260   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:17.643320   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:17.675436   84547 cri.go:89] found id: ""
	I1210 00:03:17.675462   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.675472   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:17.675479   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:17.675538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:17.709428   84547 cri.go:89] found id: ""
	I1210 00:03:17.709465   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.709476   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:17.709484   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:17.709544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:17.764088   84547 cri.go:89] found id: ""
	I1210 00:03:17.764121   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.764132   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:17.764139   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:17.764200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:17.797908   84547 cri.go:89] found id: ""
	I1210 00:03:17.797933   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.797944   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:17.797953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:17.798016   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:17.829580   84547 cri.go:89] found id: ""
	I1210 00:03:17.829609   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.829620   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:17.829628   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:17.829687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:17.865106   84547 cri.go:89] found id: ""
	I1210 00:03:17.865136   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.865145   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:17.865150   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:17.865200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:17.896757   84547 cri.go:89] found id: ""
	I1210 00:03:17.896791   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.896803   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:17.896814   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:17.896830   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:17.977210   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:17.977244   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:18.016843   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:18.016867   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:18.065263   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:18.065294   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:18.078188   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:18.078212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:18.148164   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
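A minimal sketch of the probe loop recorded above, using only commands that already appear in the log (assumes shell access to the minikube node; illustrative only, not part of the captured output):

    # list any kube-apiserver container known to the CRI; empty output means none was created
    sudo crictl ps -a --quiet --name=kube-apiserver
    # with no apiserver container, localhost:8443 refuses connections, which is why this fails
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

The same listing is repeated in the log for etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet and kubernetes-dashboard.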
	I1210 00:03:17.698675   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:20.199732   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:19.755949   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:21.756959   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:19.990714   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:21.990766   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:23.991443   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:23.991468   83900 pod_ready.go:82] duration metric: took 4m0.007840011s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	E1210 00:03:23.991477   83900 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 00:03:23.991484   83900 pod_ready.go:39] duration metric: took 4m6.196076299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
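A minimal sketch of the readiness wait that timed out above, run on the node with the kubectl binary and kubeconfig paths seen elsewhere in this log (the use of `kubectl wait` and the rounded 4m timeout are assumptions; the pod name and namespace are taken from the log):

    # roughly equivalent to the 4m WaitExtra poll on the metrics-server pod
    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system wait --for=condition=Ready pod/metrics-server-6867b74b74-hg7c5 --timeout=4m0s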
	I1210 00:03:23.991501   83900 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:03:23.991534   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:23.991610   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:24.032198   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:24.032227   83900 cri.go:89] found id: ""
	I1210 00:03:24.032237   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:24.032303   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.037671   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:24.037746   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:24.076467   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:24.076499   83900 cri.go:89] found id: ""
	I1210 00:03:24.076507   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:24.076557   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.081125   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:24.081193   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:24.125433   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:24.125472   83900 cri.go:89] found id: ""
	I1210 00:03:24.125483   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:24.125542   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.131023   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:24.131097   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:24.173215   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:24.173239   83900 cri.go:89] found id: ""
	I1210 00:03:24.173247   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:24.173303   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.179027   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:24.179108   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:24.228905   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:24.228936   83900 cri.go:89] found id: ""
	I1210 00:03:24.228946   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:24.229004   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.234441   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:24.234520   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:20.648921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:20.661458   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:20.661516   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:20.693473   84547 cri.go:89] found id: ""
	I1210 00:03:20.693509   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.693518   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:20.693524   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:20.693576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:20.726279   84547 cri.go:89] found id: ""
	I1210 00:03:20.726301   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.726309   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:20.726314   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:20.726375   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:20.759895   84547 cri.go:89] found id: ""
	I1210 00:03:20.759922   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.759931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:20.759936   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:20.759988   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:20.793934   84547 cri.go:89] found id: ""
	I1210 00:03:20.793964   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.793974   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:20.793982   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:20.794049   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:20.828040   84547 cri.go:89] found id: ""
	I1210 00:03:20.828066   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.828077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:20.828084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:20.828143   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:20.861917   84547 cri.go:89] found id: ""
	I1210 00:03:20.861950   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.861960   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:20.861967   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:20.862028   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:20.896458   84547 cri.go:89] found id: ""
	I1210 00:03:20.896481   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.896489   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:20.896494   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:20.896551   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:20.931014   84547 cri.go:89] found id: ""
	I1210 00:03:20.931044   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.931052   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:20.931061   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:20.931072   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:20.968693   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:20.968718   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:21.021880   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:21.021917   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:21.035848   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:21.035881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:21.104570   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:21.104604   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:21.104621   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:23.679447   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:23.692326   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:23.692426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:23.729773   84547 cri.go:89] found id: ""
	I1210 00:03:23.729796   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.729804   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:23.729809   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:23.729855   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:23.765875   84547 cri.go:89] found id: ""
	I1210 00:03:23.765905   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.765915   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:23.765922   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:23.765984   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:23.800785   84547 cri.go:89] found id: ""
	I1210 00:03:23.800821   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.800831   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:23.800838   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:23.800902   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:23.838137   84547 cri.go:89] found id: ""
	I1210 00:03:23.838160   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.838168   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:23.838173   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:23.838222   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:23.869916   84547 cri.go:89] found id: ""
	I1210 00:03:23.869947   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.869958   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:23.869966   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:23.870027   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:23.903939   84547 cri.go:89] found id: ""
	I1210 00:03:23.903962   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.903969   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:23.903975   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:23.904021   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:23.937092   84547 cri.go:89] found id: ""
	I1210 00:03:23.937119   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.937127   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:23.937133   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:23.937194   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:23.970562   84547 cri.go:89] found id: ""
	I1210 00:03:23.970599   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.970611   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:23.970622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:23.970641   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:23.983364   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:23.983394   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:24.066624   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:24.066645   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:24.066657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:24.151466   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:24.151502   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:24.204590   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:24.204615   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:22.698354   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.698462   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.256922   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:26.755965   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.279840   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:24.279859   83900 cri.go:89] found id: ""
	I1210 00:03:24.279866   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:24.279922   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.283898   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:24.283972   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:24.319541   83900 cri.go:89] found id: ""
	I1210 00:03:24.319598   83900 logs.go:282] 0 containers: []
	W1210 00:03:24.319612   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:24.319624   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:24.319686   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:24.356205   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:24.356228   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:24.356234   83900 cri.go:89] found id: ""
	I1210 00:03:24.356242   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:24.356302   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.361772   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.366656   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:24.366690   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:24.408922   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:24.408955   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:24.466720   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:24.466762   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:24.500254   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:24.500290   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:24.535025   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:24.535058   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:24.575706   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:24.575732   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:24.645227   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:24.645265   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:24.658833   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:24.658878   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:24.702076   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:24.702107   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:24.738869   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:24.738900   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:25.204715   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:25.204753   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:25.320267   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:25.320315   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:25.370599   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:25.370635   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:27.910863   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:27.925070   83900 api_server.go:72] duration metric: took 4m17.856306845s to wait for apiserver process to appear ...
	I1210 00:03:27.925098   83900 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:03:27.925164   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:27.925227   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:27.959991   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:27.960017   83900 cri.go:89] found id: ""
	I1210 00:03:27.960026   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:27.960081   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:27.964069   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:27.964128   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:27.997618   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:27.997640   83900 cri.go:89] found id: ""
	I1210 00:03:27.997650   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:27.997710   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.001497   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:28.001570   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:28.034858   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:28.034883   83900 cri.go:89] found id: ""
	I1210 00:03:28.034891   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:28.034953   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.038775   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:28.038852   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:28.074473   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:28.074497   83900 cri.go:89] found id: ""
	I1210 00:03:28.074506   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:28.074568   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.079082   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:28.079149   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:28.113948   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:28.113971   83900 cri.go:89] found id: ""
	I1210 00:03:28.113981   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:28.114045   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.118930   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:28.118982   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:28.162794   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:28.162820   83900 cri.go:89] found id: ""
	I1210 00:03:28.162827   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:28.162872   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.166637   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:28.166715   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:28.206591   83900 cri.go:89] found id: ""
	I1210 00:03:28.206619   83900 logs.go:282] 0 containers: []
	W1210 00:03:28.206627   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:28.206633   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:28.206693   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:28.251722   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:28.251748   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:28.251754   83900 cri.go:89] found id: ""
	I1210 00:03:28.251763   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:28.251821   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.256785   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.260355   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:28.260378   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:28.295801   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:28.295827   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:28.734920   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:28.734963   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:28.804266   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:28.804305   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:28.818223   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:28.818251   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:28.931008   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:28.931044   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:28.977544   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:28.977592   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:29.016645   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:29.016679   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:29.052920   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:29.052945   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:29.094464   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:29.094497   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:29.132520   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:29.132545   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:29.180364   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:29.180396   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:29.216627   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:29.216657   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:26.774169   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:26.786888   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:26.786964   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:26.820391   84547 cri.go:89] found id: ""
	I1210 00:03:26.820423   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.820433   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:26.820441   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:26.820504   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:26.853927   84547 cri.go:89] found id: ""
	I1210 00:03:26.853955   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.853963   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:26.853971   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:26.854031   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:26.887309   84547 cri.go:89] found id: ""
	I1210 00:03:26.887337   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.887347   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:26.887353   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:26.887415   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:26.923869   84547 cri.go:89] found id: ""
	I1210 00:03:26.923897   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.923908   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:26.923915   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:26.923983   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:26.959919   84547 cri.go:89] found id: ""
	I1210 00:03:26.959952   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.959964   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:26.959971   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:26.960032   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:26.993742   84547 cri.go:89] found id: ""
	I1210 00:03:26.993775   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.993787   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:26.993794   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:26.993853   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:27.026977   84547 cri.go:89] found id: ""
	I1210 00:03:27.027011   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.027021   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:27.027028   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:27.027090   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:27.064135   84547 cri.go:89] found id: ""
	I1210 00:03:27.064164   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.064173   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:27.064181   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:27.064192   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:27.101758   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:27.101784   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:27.155536   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:27.155592   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:27.168891   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:27.168914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:27.242486   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:27.242511   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:27.242525   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:29.821740   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:29.834536   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:29.834604   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:29.868316   84547 cri.go:89] found id: ""
	I1210 00:03:29.868341   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.868348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:29.868354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:29.868402   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:29.902290   84547 cri.go:89] found id: ""
	I1210 00:03:29.902320   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.902331   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:29.902338   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:29.902399   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:29.948750   84547 cri.go:89] found id: ""
	I1210 00:03:29.948780   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.948792   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:29.948800   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:29.948864   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:29.994587   84547 cri.go:89] found id: ""
	I1210 00:03:29.994618   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.994629   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:29.994636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:29.994694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:30.042216   84547 cri.go:89] found id: ""
	I1210 00:03:30.042243   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.042256   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:30.042264   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:30.042345   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:30.075964   84547 cri.go:89] found id: ""
	I1210 00:03:30.075989   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.075999   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:30.076007   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:30.076072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:30.108647   84547 cri.go:89] found id: ""
	I1210 00:03:30.108676   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.108687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:30.108695   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:30.108760   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:30.140983   84547 cri.go:89] found id: ""
	I1210 00:03:30.141013   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.141022   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:30.141030   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:30.141040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:30.198281   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:30.198319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:30.213820   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:30.213849   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:30.283907   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:30.283927   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:30.283940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:30.360731   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:30.360768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:27.197512   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:29.698238   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.698679   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:28.757444   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.255481   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.759061   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1210 00:03:31.763064   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 200:
	ok
	I1210 00:03:31.763974   83900 api_server.go:141] control plane version: v1.31.2
	I1210 00:03:31.763992   83900 api_server.go:131] duration metric: took 3.83888731s to wait for apiserver health ...
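A minimal sketch of the health probe that just returned 200, using curl in place of the test's internal HTTP client (curl and its -k flag are assumptions; the URL is taken from the log):

    # expect "ok" with HTTP 200 once the apiserver is healthy
    curl -k https://192.168.50.19:8443/healthz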
	I1210 00:03:31.763999   83900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:03:31.764021   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:31.764077   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:31.798282   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:31.798312   83900 cri.go:89] found id: ""
	I1210 00:03:31.798320   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:31.798375   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.802661   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:31.802726   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:31.837861   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:31.837888   83900 cri.go:89] found id: ""
	I1210 00:03:31.837896   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:31.837943   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.842139   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:31.842224   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:31.876940   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:31.876967   83900 cri.go:89] found id: ""
	I1210 00:03:31.876977   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:31.877043   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.881416   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:31.881491   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:31.915449   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:31.915481   83900 cri.go:89] found id: ""
	I1210 00:03:31.915491   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:31.915575   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.919342   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:31.919400   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:31.954554   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:31.954583   83900 cri.go:89] found id: ""
	I1210 00:03:31.954592   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:31.954641   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.958484   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:31.958569   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:32.000361   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:32.000390   83900 cri.go:89] found id: ""
	I1210 00:03:32.000401   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:32.000462   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.004339   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:32.004400   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:32.043231   83900 cri.go:89] found id: ""
	I1210 00:03:32.043254   83900 logs.go:282] 0 containers: []
	W1210 00:03:32.043279   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:32.043285   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:32.043334   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:32.079572   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:32.079597   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:32.079608   83900 cri.go:89] found id: ""
	I1210 00:03:32.079615   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:32.079661   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.083526   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.087101   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:32.087126   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:32.494977   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:32.495021   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:32.509299   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:32.509342   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:32.612029   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:32.612066   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:32.656868   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:32.656900   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:32.695347   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:32.695376   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:32.732071   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:32.732100   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:32.767692   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:32.767718   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:32.836047   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:32.836088   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:32.887136   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:32.887175   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:32.929836   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:32.929873   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:32.972459   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:32.972492   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:33.029387   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:33.029415   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:32.904302   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:32.919123   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:32.919209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:32.952847   84547 cri.go:89] found id: ""
	I1210 00:03:32.952879   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.952889   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:32.952897   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:32.952961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:32.986993   84547 cri.go:89] found id: ""
	I1210 00:03:32.987020   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.987029   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:32.987035   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:32.987085   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:33.027506   84547 cri.go:89] found id: ""
	I1210 00:03:33.027536   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.027548   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:33.027556   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:33.027630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:33.061553   84547 cri.go:89] found id: ""
	I1210 00:03:33.061588   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.061605   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:33.061613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:33.061673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:33.097640   84547 cri.go:89] found id: ""
	I1210 00:03:33.097679   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.097693   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:33.097702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:33.097783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:33.131725   84547 cri.go:89] found id: ""
	I1210 00:03:33.131758   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.131768   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:33.131775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:33.131839   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:33.165731   84547 cri.go:89] found id: ""
	I1210 00:03:33.165759   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.165771   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:33.165779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:33.165841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:33.200288   84547 cri.go:89] found id: ""
	I1210 00:03:33.200310   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.200320   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:33.200338   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:33.200351   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:33.251524   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:33.251577   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:33.266197   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:33.266223   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:33.343529   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:33.343583   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:33.343600   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:33.420133   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:33.420175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:35.577482   83900 system_pods.go:59] 8 kube-system pods found
	I1210 00:03:35.577511   83900 system_pods.go:61] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running
	I1210 00:03:35.577516   83900 system_pods.go:61] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running
	I1210 00:03:35.577520   83900 system_pods.go:61] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running
	I1210 00:03:35.577524   83900 system_pods.go:61] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running
	I1210 00:03:35.577527   83900 system_pods.go:61] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running
	I1210 00:03:35.577529   83900 system_pods.go:61] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running
	I1210 00:03:35.577537   83900 system_pods.go:61] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:03:35.577542   83900 system_pods.go:61] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running
	I1210 00:03:35.577549   83900 system_pods.go:74] duration metric: took 3.81354476s to wait for pod list to return data ...
	I1210 00:03:35.577556   83900 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:03:35.579951   83900 default_sa.go:45] found service account: "default"
	I1210 00:03:35.579971   83900 default_sa.go:55] duration metric: took 2.410307ms for default service account to be created ...
	I1210 00:03:35.579977   83900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:03:35.584102   83900 system_pods.go:86] 8 kube-system pods found
	I1210 00:03:35.584126   83900 system_pods.go:89] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running
	I1210 00:03:35.584131   83900 system_pods.go:89] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running
	I1210 00:03:35.584136   83900 system_pods.go:89] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running
	I1210 00:03:35.584142   83900 system_pods.go:89] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running
	I1210 00:03:35.584148   83900 system_pods.go:89] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running
	I1210 00:03:35.584153   83900 system_pods.go:89] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running
	I1210 00:03:35.584163   83900 system_pods.go:89] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:03:35.584168   83900 system_pods.go:89] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running
	I1210 00:03:35.584181   83900 system_pods.go:126] duration metric: took 4.196356ms to wait for k8s-apps to be running ...
	I1210 00:03:35.584192   83900 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:03:35.584235   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:03:35.598192   83900 system_svc.go:56] duration metric: took 13.993491ms WaitForService to wait for kubelet
	I1210 00:03:35.598218   83900 kubeadm.go:582] duration metric: took 4m25.529459505s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:03:35.598238   83900 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:03:35.600992   83900 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:03:35.601012   83900 node_conditions.go:123] node cpu capacity is 2
	I1210 00:03:35.601025   83900 node_conditions.go:105] duration metric: took 2.782415ms to run NodePressure ...
	I1210 00:03:35.601038   83900 start.go:241] waiting for startup goroutines ...
	I1210 00:03:35.601049   83900 start.go:246] waiting for cluster config update ...
	I1210 00:03:35.601063   83900 start.go:255] writing updated cluster config ...
	I1210 00:03:35.601360   83900 ssh_runner.go:195] Run: rm -f paused
	I1210 00:03:35.647487   83900 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:03:35.650508   83900 out.go:177] * Done! kubectl is now configured to use "embed-certs-825613" cluster and "default" namespace by default
	I1210 00:03:34.199682   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:36.696731   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:33.258255   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:35.756802   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:35.971411   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:35.988993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:35.989059   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:36.022639   84547 cri.go:89] found id: ""
	I1210 00:03:36.022673   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.022684   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:36.022692   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:36.022758   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:36.055421   84547 cri.go:89] found id: ""
	I1210 00:03:36.055452   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.055461   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:36.055466   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:36.055514   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:36.087690   84547 cri.go:89] found id: ""
	I1210 00:03:36.087721   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.087731   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:36.087738   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:36.087802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:36.125209   84547 cri.go:89] found id: ""
	I1210 00:03:36.125240   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.125249   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:36.125254   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:36.125304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:36.157544   84547 cri.go:89] found id: ""
	I1210 00:03:36.157586   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.157599   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:36.157607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:36.157676   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:36.193415   84547 cri.go:89] found id: ""
	I1210 00:03:36.193448   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.193459   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:36.193466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:36.193525   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:36.228941   84547 cri.go:89] found id: ""
	I1210 00:03:36.228971   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.228982   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:36.228989   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:36.229052   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:36.264821   84547 cri.go:89] found id: ""
	I1210 00:03:36.264851   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.264862   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:36.264873   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:36.264887   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:36.314841   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:36.314881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:36.328664   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:36.328695   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:36.402929   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:36.402956   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:36.402970   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:36.480270   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:36.480306   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:39.022748   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:39.036323   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:39.036382   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:39.070582   84547 cri.go:89] found id: ""
	I1210 00:03:39.070608   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.070616   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:39.070622   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:39.070690   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:39.108783   84547 cri.go:89] found id: ""
	I1210 00:03:39.108814   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.108825   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:39.108832   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:39.108889   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:39.142300   84547 cri.go:89] found id: ""
	I1210 00:03:39.142330   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.142338   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:39.142344   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:39.142400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:39.177859   84547 cri.go:89] found id: ""
	I1210 00:03:39.177891   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.177903   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:39.177911   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:39.177965   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:39.212109   84547 cri.go:89] found id: ""
	I1210 00:03:39.212140   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.212152   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:39.212160   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:39.212209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:39.245451   84547 cri.go:89] found id: ""
	I1210 00:03:39.245490   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.245501   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:39.245509   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:39.245569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:39.278925   84547 cri.go:89] found id: ""
	I1210 00:03:39.278957   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.278967   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:39.278974   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:39.279063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:39.312322   84547 cri.go:89] found id: ""
	I1210 00:03:39.312350   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.312358   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:39.312367   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:39.312377   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:39.362844   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:39.362882   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:39.375938   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:39.375967   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:39.443649   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:39.443676   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:39.443691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:39.523722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:39.523757   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:38.698105   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:41.198814   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:38.256567   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:40.257434   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:42.062317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:42.075094   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:42.075157   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:42.106719   84547 cri.go:89] found id: ""
	I1210 00:03:42.106744   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.106751   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:42.106757   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:42.106805   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:42.141977   84547 cri.go:89] found id: ""
	I1210 00:03:42.142011   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.142022   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:42.142029   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:42.142089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:42.175117   84547 cri.go:89] found id: ""
	I1210 00:03:42.175151   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.175164   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:42.175172   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:42.175243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:42.209871   84547 cri.go:89] found id: ""
	I1210 00:03:42.209900   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.209911   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:42.209919   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:42.209982   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:42.252784   84547 cri.go:89] found id: ""
	I1210 00:03:42.252812   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.252822   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:42.252830   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:42.252891   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:42.285934   84547 cri.go:89] found id: ""
	I1210 00:03:42.285961   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.285973   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:42.285980   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:42.286040   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:42.327393   84547 cri.go:89] found id: ""
	I1210 00:03:42.327423   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.327433   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:42.327440   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:42.327498   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:42.362244   84547 cri.go:89] found id: ""
	I1210 00:03:42.362279   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.362289   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:42.362302   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:42.362317   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:42.441656   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:42.441691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:42.479357   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:42.479397   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:42.533002   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:42.533038   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:42.546485   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:42.546514   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:42.616912   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
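Every one of the failed "describe nodes" attempts above reports the same symptom: nothing is listening on localhost:8443, which matches the crictl queries that keep finding no kube-apiserver container. A quick manual check from the node reproduces that condition (a sketch only; the endpoint comes from the error text above and the flags are ordinary curl/crictl options):

    # probe the endpoint kubectl is complaining about
    curl -k --connect-timeout 2 https://localhost:8443/healthz || echo "apiserver not reachable"
    # confirm no apiserver container exists, matching the crictl output above
    sudo crictl ps -a --name kube-apiserver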
	I1210 00:03:45.117156   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:45.131265   84547 kubeadm.go:597] duration metric: took 4m2.157458218s to restartPrimaryControlPlane
	W1210 00:03:45.131350   84547 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:03:45.131379   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:03:43.699026   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:46.197420   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:42.756686   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:44.251077   84259 pod_ready.go:82] duration metric: took 4m0.000996899s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" ...
	E1210 00:03:44.251112   84259 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 00:03:44.251136   84259 pod_ready.go:39] duration metric: took 4m13.15350289s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:03:44.251170   84259 kubeadm.go:597] duration metric: took 4m23.227405415s to restartPrimaryControlPlane
	W1210 00:03:44.251238   84259 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:03:44.251368   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:03:46.778025   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.646620044s)
	I1210 00:03:46.778099   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:03:46.792384   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:03:46.802897   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:03:46.813393   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:03:46.813417   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:03:46.813473   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:03:46.823016   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:03:46.823093   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:03:46.832912   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:03:46.841961   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:03:46.842019   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:03:46.851160   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.859879   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:03:46.859938   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.869192   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:03:46.878423   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:03:46.878487   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
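The sequence just above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint and is removed otherwise (here every grep fails because the files do not exist yet). Condensed into a loop, the same logic looks roughly like this (a sketch; the endpoint URL and file names are taken from the log lines above):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # drop the file unless it already references the expected apiserver endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done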
	I1210 00:03:46.888103   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:03:47.105146   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:03:48.698211   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:51.197463   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:53.198578   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:55.697251   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:57.698289   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:00.198291   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:02.696926   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:04.697431   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:06.698260   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
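The long run of pod_ready lines above is minikube polling the Ready condition of the metrics-server pod roughly every two seconds until the wait deadline passes (the parallel profile above hit its 4m0s limit at 00:03:44). An equivalent one-off check would be something like the following (a sketch; the pod name is the one in the log, and the command must be pointed at the matching kubeconfig/context):

    kubectl -n kube-system get pod metrics-server-6867b74b74-sd58c \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'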
	I1210 00:04:10.270952   84259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.019545181s)
	I1210 00:04:10.271025   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:10.292112   84259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:04:10.307538   84259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:04:10.325375   84259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:04:10.325406   84259 kubeadm.go:157] found existing configuration files:
	
	I1210 00:04:10.325465   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 00:04:10.338892   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:04:10.338960   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:04:10.353888   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 00:04:10.364787   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:04:10.364852   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:04:10.374513   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 00:04:10.393430   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:04:10.393486   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:04:10.403621   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 00:04:10.412759   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:04:10.412824   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:04:10.422153   84259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:04:10.468789   84259 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:04:10.468852   84259 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:04:10.566712   84259 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:04:10.566840   84259 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:04:10.566939   84259 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:04:10.574253   84259 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:04:10.576376   84259 out.go:235]   - Generating certificates and keys ...
	I1210 00:04:10.576485   84259 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:04:10.576566   84259 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:04:10.576722   84259 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:04:10.576816   84259 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:04:10.576915   84259 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:04:10.576991   84259 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:04:10.577107   84259 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:04:10.577214   84259 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:04:10.577319   84259 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:04:10.577433   84259 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:04:10.577522   84259 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:04:10.577616   84259 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:04:10.870887   84259 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:04:10.977055   84259 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:04:11.160320   84259 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:04:11.207864   84259 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:04:11.315037   84259 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:04:11.315715   84259 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:04:11.319135   84259 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:04:09.197254   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:11.197831   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:11.321063   84259 out.go:235]   - Booting up control plane ...
	I1210 00:04:11.321193   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:04:11.321296   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:04:11.322134   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:04:11.341567   84259 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:04:11.347658   84259 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:04:11.347736   84259 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:04:11.492127   84259 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:04:11.492293   84259 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:04:11.994410   84259 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.01471ms
	I1210 00:04:11.994533   84259 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
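kubeadm's two readiness gates here are plain HTTP health endpoints: the kubelet's local healthz on 127.0.0.1:10248 and the apiserver's /healthz. Both can be probed by hand while waiting (a sketch; the kubelet URL is the one printed above, and 192.168.72.54:8444 is this profile's apiserver address as recorded further down in the log):

    curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
    curl -sk https://192.168.72.54:8444/healthz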
	I1210 00:04:13.697731   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:16.198771   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:16.998638   84259 kubeadm.go:310] [api-check] The API server is healthy after 5.002934845s
	I1210 00:04:17.009930   84259 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:04:17.025483   84259 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:04:17.059445   84259 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:04:17.059762   84259 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-871210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:04:17.071665   84259 kubeadm.go:310] [bootstrap-token] Using token: 1x1r34.gs3p33sqgju9dylj
	I1210 00:04:17.073936   84259 out.go:235]   - Configuring RBAC rules ...
	I1210 00:04:17.074055   84259 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:04:17.079920   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:04:17.090408   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:04:17.094072   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:04:17.096951   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:04:17.099929   84259 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:04:17.404431   84259 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:04:17.837125   84259 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:04:18.404721   84259 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:04:18.405661   84259 kubeadm.go:310] 
	I1210 00:04:18.405757   84259 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:04:18.405770   84259 kubeadm.go:310] 
	I1210 00:04:18.405871   84259 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:04:18.405883   84259 kubeadm.go:310] 
	I1210 00:04:18.405916   84259 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:04:18.406012   84259 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:04:18.406101   84259 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:04:18.406111   84259 kubeadm.go:310] 
	I1210 00:04:18.406197   84259 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:04:18.406209   84259 kubeadm.go:310] 
	I1210 00:04:18.406318   84259 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:04:18.406329   84259 kubeadm.go:310] 
	I1210 00:04:18.406412   84259 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:04:18.406482   84259 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:04:18.406544   84259 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:04:18.406550   84259 kubeadm.go:310] 
	I1210 00:04:18.406643   84259 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:04:18.406787   84259 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:04:18.406802   84259 kubeadm.go:310] 
	I1210 00:04:18.406919   84259 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 1x1r34.gs3p33sqgju9dylj \
	I1210 00:04:18.407089   84259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1210 00:04:18.407117   84259 kubeadm.go:310] 	--control-plane 
	I1210 00:04:18.407123   84259 kubeadm.go:310] 
	I1210 00:04:18.407207   84259 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:04:18.407214   84259 kubeadm.go:310] 
	I1210 00:04:18.407300   84259 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 1x1r34.gs3p33sqgju9dylj \
	I1210 00:04:18.407433   84259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1210 00:04:18.408197   84259 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:04:18.408272   84259 cni.go:84] Creating CNI manager for ""
	I1210 00:04:18.408289   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:04:18.410841   84259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:04:18.697285   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:20.698266   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:18.412209   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:04:18.422123   84259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 00:04:18.440972   84259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:04:18.441058   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:18.441138   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-871210 minikube.k8s.io/updated_at=2024_12_10T00_04_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=default-k8s-diff-port-871210 minikube.k8s.io/primary=true
	I1210 00:04:18.462185   84259 ops.go:34] apiserver oom_adj: -16
	I1210 00:04:18.625855   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:19.126760   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:19.626829   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:20.126663   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:20.626893   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:21.126851   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:21.625892   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.126904   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.626450   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.715909   84259 kubeadm.go:1113] duration metric: took 4.274924102s to wait for elevateKubeSystemPrivileges
	I1210 00:04:22.715958   84259 kubeadm.go:394] duration metric: took 5m1.740971462s to StartCluster
	I1210 00:04:22.715982   84259 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:04:22.716080   84259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:04:22.718538   84259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:04:22.718890   84259 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:04:22.718958   84259 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:04:22.719059   84259 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719079   84259 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.719091   84259 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:04:22.719100   84259 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719123   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.719120   84259 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719169   84259 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.719184   84259 addons.go:243] addon metrics-server should already be in state true
	I1210 00:04:22.719216   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.719133   84259 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:04:22.719134   84259 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-871210"
	I1210 00:04:22.719582   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719631   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.719717   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719728   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719753   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.719763   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.720519   84259 out.go:177] * Verifying Kubernetes components...
	I1210 00:04:22.722140   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:04:22.735592   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
	I1210 00:04:22.735716   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I1210 00:04:22.736075   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.736100   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.736605   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.736626   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.736605   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.736642   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.736995   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.737007   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.737217   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.737595   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.737639   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.739278   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41255
	I1210 00:04:22.741315   84259 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.741337   84259 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:04:22.741367   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.741739   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.741778   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.742259   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.742829   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.742858   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.743182   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.743632   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.743663   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.754879   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1210 00:04:22.755310   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.756015   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.756032   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.756397   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.756576   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.758535   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.759514   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33065
	I1210 00:04:22.759891   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.759925   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39821
	I1210 00:04:22.760309   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.760330   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.760346   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.760435   84259 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:04:22.760689   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.760831   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.760845   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.760860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.761166   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.761589   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.761627   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.761737   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:04:22.761754   84259 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:04:22.761774   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.762882   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.764440   84259 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:04:22.764810   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.765316   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.765339   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.765474   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.765675   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.765828   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.765894   84259 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:04:22.765912   84259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:04:22.765919   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.765934   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.768979   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.769446   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.769498   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.769626   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.769832   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.769956   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.770078   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.780995   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45841
	I1210 00:04:22.781451   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.782077   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.782098   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.782467   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.782705   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.784771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.785015   84259 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:04:22.785030   84259 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:04:22.785047   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.788208   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.788659   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.788690   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.788771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.789148   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.789275   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.789386   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.926076   84259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:04:22.945059   84259 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-871210" to be "Ready" ...
	I1210 00:04:22.970684   84259 node_ready.go:49] node "default-k8s-diff-port-871210" has status "Ready":"True"
	I1210 00:04:22.970727   84259 node_ready.go:38] duration metric: took 25.618738ms for node "default-k8s-diff-port-871210" to be "Ready" ...
	I1210 00:04:22.970740   84259 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:22.989661   84259 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:23.045411   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:04:23.045433   84259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:04:23.074907   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:04:23.076857   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:04:23.094809   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:04:23.094836   84259 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:04:23.125107   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:04:23.125136   84259 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:04:23.174286   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:04:23.554521   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554556   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554568   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554589   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554855   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.554871   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.554880   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554882   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.554888   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554893   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.554891   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.554903   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.555087   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.555099   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.555283   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.555354   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.555379   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.579741   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.579775   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.580075   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.580091   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.580113   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.776674   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.776701   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.777020   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.777060   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.777068   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.777076   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.777089   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.777314   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.777330   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.777347   84259 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-871210"
	I1210 00:04:23.778992   84259 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 00:04:23.198263   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:25.198299   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:23.780354   84259 addons.go:510] duration metric: took 1.061403814s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 00:04:25.012490   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:27.495987   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:27.697345   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:29.698123   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:31.692183   83859 pod_ready.go:82] duration metric: took 4m0.000797786s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" ...
	E1210 00:04:31.692211   83859 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 00:04:31.692228   83859 pod_ready.go:39] duration metric: took 4m11.541153015s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:31.692258   83859 kubeadm.go:597] duration metric: took 4m19.334550967s to restartPrimaryControlPlane
	W1210 00:04:31.692305   83859 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:04:31.692329   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:04:29.995452   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:31.997927   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:34.496689   84259 pod_ready.go:93] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.496716   84259 pod_ready.go:82] duration metric: took 11.507027662s for pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.496730   84259 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.501166   84259 pod_ready.go:93] pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.501188   84259 pod_ready.go:82] duration metric: took 4.45016ms for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.501198   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.506061   84259 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.506085   84259 pod_ready.go:82] duration metric: took 4.881919ms for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.506096   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.510401   84259 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.510422   84259 pod_ready.go:82] duration metric: took 4.320761ms for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.510432   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pj85d" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.514319   84259 pod_ready.go:93] pod "kube-proxy-pj85d" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.514340   84259 pod_ready.go:82] duration metric: took 3.902541ms for pod "kube-proxy-pj85d" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.514348   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.896369   84259 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.896393   84259 pod_ready.go:82] duration metric: took 382.038726ms for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.896401   84259 pod_ready.go:39] duration metric: took 11.925650242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:34.896415   84259 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:04:34.896466   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:04:34.910589   84259 api_server.go:72] duration metric: took 12.191654524s to wait for apiserver process to appear ...
	I1210 00:04:34.910617   84259 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:04:34.910639   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1210 00:04:34.916503   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I1210 00:04:34.917955   84259 api_server.go:141] control plane version: v1.31.2
	I1210 00:04:34.917979   84259 api_server.go:131] duration metric: took 7.355946ms to wait for apiserver health ...
	I1210 00:04:34.917987   84259 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:04:35.098220   84259 system_pods.go:59] 9 kube-system pods found
	I1210 00:04:35.098252   84259 system_pods.go:61] "coredns-7c65d6cfc9-7xpcc" [6cff7e56-1785-41c8-bd9c-db9d3f0bd05f] Running
	I1210 00:04:35.098259   84259 system_pods.go:61] "coredns-7c65d6cfc9-z2n25" [b6b81952-7281-4705-9536-06eb939a5807] Running
	I1210 00:04:35.098265   84259 system_pods.go:61] "etcd-default-k8s-diff-port-871210" [e9c747a1-72c0-4b30-b401-c39a41fe5eb5] Running
	I1210 00:04:35.098271   84259 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-871210" [1a27b619-b576-4a36-b88a-0c498ad38628] Running
	I1210 00:04:35.098276   84259 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-871210" [d22a69b7-3389-46a9-9ca5-a3101aa34e05] Running
	I1210 00:04:35.098280   84259 system_pods.go:61] "kube-proxy-pj85d" [d1b9b056-f4a3-419c-86fa-a94d88464f74] Running
	I1210 00:04:35.098285   84259 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-871210" [50a03a5c-d0e5-454a-9a7b-49963ff59d06] Running
	I1210 00:04:35.098294   84259 system_pods.go:61] "metrics-server-6867b74b74-7g2qm" [49ac129a-c85d-4af1-b3b2-06bc10bced77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:04:35.098298   84259 system_pods.go:61] "storage-provisioner" [ea716edd-4030-4ec3-b094-c3a50154b473] Running
	I1210 00:04:35.098309   84259 system_pods.go:74] duration metric: took 180.315426ms to wait for pod list to return data ...
	I1210 00:04:35.098322   84259 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:04:35.294342   84259 default_sa.go:45] found service account: "default"
	I1210 00:04:35.294367   84259 default_sa.go:55] duration metric: took 196.039183ms for default service account to be created ...
	I1210 00:04:35.294376   84259 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:04:35.497331   84259 system_pods.go:86] 9 kube-system pods found
	I1210 00:04:35.497364   84259 system_pods.go:89] "coredns-7c65d6cfc9-7xpcc" [6cff7e56-1785-41c8-bd9c-db9d3f0bd05f] Running
	I1210 00:04:35.497369   84259 system_pods.go:89] "coredns-7c65d6cfc9-z2n25" [b6b81952-7281-4705-9536-06eb939a5807] Running
	I1210 00:04:35.497373   84259 system_pods.go:89] "etcd-default-k8s-diff-port-871210" [e9c747a1-72c0-4b30-b401-c39a41fe5eb5] Running
	I1210 00:04:35.497377   84259 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-871210" [1a27b619-b576-4a36-b88a-0c498ad38628] Running
	I1210 00:04:35.497381   84259 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-871210" [d22a69b7-3389-46a9-9ca5-a3101aa34e05] Running
	I1210 00:04:35.497384   84259 system_pods.go:89] "kube-proxy-pj85d" [d1b9b056-f4a3-419c-86fa-a94d88464f74] Running
	I1210 00:04:35.497387   84259 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-871210" [50a03a5c-d0e5-454a-9a7b-49963ff59d06] Running
	I1210 00:04:35.497396   84259 system_pods.go:89] "metrics-server-6867b74b74-7g2qm" [49ac129a-c85d-4af1-b3b2-06bc10bced77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:04:35.497400   84259 system_pods.go:89] "storage-provisioner" [ea716edd-4030-4ec3-b094-c3a50154b473] Running
	I1210 00:04:35.497409   84259 system_pods.go:126] duration metric: took 203.02694ms to wait for k8s-apps to be running ...
	I1210 00:04:35.497416   84259 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:04:35.497456   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:35.512217   84259 system_svc.go:56] duration metric: took 14.792056ms WaitForService to wait for kubelet
	I1210 00:04:35.512248   84259 kubeadm.go:582] duration metric: took 12.793318604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:04:35.512274   84259 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:04:35.695292   84259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:04:35.695318   84259 node_conditions.go:123] node cpu capacity is 2
	I1210 00:04:35.695329   84259 node_conditions.go:105] duration metric: took 183.048181ms to run NodePressure ...
	I1210 00:04:35.695341   84259 start.go:241] waiting for startup goroutines ...
	I1210 00:04:35.695348   84259 start.go:246] waiting for cluster config update ...
	I1210 00:04:35.695361   84259 start.go:255] writing updated cluster config ...
	I1210 00:04:35.695666   84259 ssh_runner.go:195] Run: rm -f paused
	I1210 00:04:35.742539   84259 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:04:35.744394   84259 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-871210" cluster and "default" namespace by default
	I1210 00:04:57.851348   83859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.159001958s)
	I1210 00:04:57.851413   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:57.866601   83859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:04:57.876643   83859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:04:57.886172   83859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:04:57.886200   83859 kubeadm.go:157] found existing configuration files:
	
	I1210 00:04:57.886252   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:04:57.895643   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:04:57.895722   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:04:57.905397   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:04:57.914236   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:04:57.914299   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:04:57.923225   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:04:57.932422   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:04:57.932476   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:04:57.942840   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:04:57.952087   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:04:57.952159   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:04:57.961371   83859 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:04:58.005314   83859 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:04:58.005444   83859 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:04:58.098287   83859 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:04:58.098431   83859 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:04:58.098591   83859 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:04:58.106525   83859 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:04:58.109142   83859 out.go:235]   - Generating certificates and keys ...
	I1210 00:04:58.109219   83859 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:04:58.109320   83859 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:04:58.109456   83859 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:04:58.109536   83859 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:04:58.109637   83859 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:04:58.109728   83859 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:04:58.109840   83859 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:04:58.109940   83859 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:04:58.110047   83859 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:04:58.110152   83859 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:04:58.110225   83859 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:04:58.110296   83859 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:04:58.357649   83859 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:04:58.505840   83859 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:04:58.890560   83859 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:04:58.965928   83859 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:04:59.341665   83859 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:04:59.342240   83859 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:04:59.344644   83859 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:04:59.346225   83859 out.go:235]   - Booting up control plane ...
	I1210 00:04:59.346353   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:04:59.348071   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:04:59.348893   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:04:59.366824   83859 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:04:59.372906   83859 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:04:59.372962   83859 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:04:59.509554   83859 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:04:59.509695   83859 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:05:00.014472   83859 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.085309ms
	I1210 00:05:00.014639   83859 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:05:05.017162   83859 kubeadm.go:310] [api-check] The API server is healthy after 5.002677832s
	I1210 00:05:05.029078   83859 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:05:05.048524   83859 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:05:05.080458   83859 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:05:05.080735   83859 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-048296 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:05:05.091793   83859 kubeadm.go:310] [bootstrap-token] Using token: y5jzuu.af7syybsclcivlzq
	I1210 00:05:05.093253   83859 out.go:235]   - Configuring RBAC rules ...
	I1210 00:05:05.093401   83859 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:05:05.098842   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:05:05.106341   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:05:05.109935   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:05:05.114403   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:05:05.121436   83859 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:05:05.424690   83859 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:05:05.851096   83859 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:05:06.425724   83859 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:05:06.426713   83859 kubeadm.go:310] 
	I1210 00:05:06.426785   83859 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:05:06.426808   83859 kubeadm.go:310] 
	I1210 00:05:06.426904   83859 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:05:06.426936   83859 kubeadm.go:310] 
	I1210 00:05:06.426981   83859 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:05:06.427061   83859 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:05:06.427110   83859 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:05:06.427120   83859 kubeadm.go:310] 
	I1210 00:05:06.427199   83859 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:05:06.427210   83859 kubeadm.go:310] 
	I1210 00:05:06.427282   83859 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:05:06.427294   83859 kubeadm.go:310] 
	I1210 00:05:06.427381   83859 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:05:06.427486   83859 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:05:06.427623   83859 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:05:06.427635   83859 kubeadm.go:310] 
	I1210 00:05:06.427757   83859 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:05:06.427874   83859 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:05:06.427905   83859 kubeadm.go:310] 
	I1210 00:05:06.428032   83859 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y5jzuu.af7syybsclcivlzq \
	I1210 00:05:06.428167   83859 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1210 00:05:06.428201   83859 kubeadm.go:310] 	--control-plane 
	I1210 00:05:06.428211   83859 kubeadm.go:310] 
	I1210 00:05:06.428322   83859 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:05:06.428332   83859 kubeadm.go:310] 
	I1210 00:05:06.428438   83859 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y5jzuu.af7syybsclcivlzq \
	I1210 00:05:06.428572   83859 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1210 00:05:06.428746   83859 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:05:06.428832   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:05:06.428849   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:05:06.431674   83859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:05:06.433006   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:05:06.444058   83859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 00:05:06.462707   83859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:05:06.462838   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:06.462873   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-048296 minikube.k8s.io/updated_at=2024_12_10T00_05_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=no-preload-048296 minikube.k8s.io/primary=true
	I1210 00:05:06.493379   83859 ops.go:34] apiserver oom_adj: -16
	I1210 00:05:06.666080   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:07.166762   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:07.666408   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:08.166269   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:08.666734   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:09.166797   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:09.666522   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.166230   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.667061   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.774149   83859 kubeadm.go:1113] duration metric: took 4.311371383s to wait for elevateKubeSystemPrivileges
	I1210 00:05:10.774188   83859 kubeadm.go:394] duration metric: took 4m58.464009996s to StartCluster
	I1210 00:05:10.774211   83859 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:10.774290   83859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:05:10.777711   83859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:10.778040   83859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:05:10.778133   83859 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:05:10.778230   83859 addons.go:69] Setting storage-provisioner=true in profile "no-preload-048296"
	I1210 00:05:10.778247   83859 addons.go:234] Setting addon storage-provisioner=true in "no-preload-048296"
	W1210 00:05:10.778255   83859 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:05:10.778249   83859 addons.go:69] Setting default-storageclass=true in profile "no-preload-048296"
	I1210 00:05:10.778261   83859 addons.go:69] Setting metrics-server=true in profile "no-preload-048296"
	I1210 00:05:10.778276   83859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-048296"
	I1210 00:05:10.778286   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.778299   83859 addons.go:234] Setting addon metrics-server=true in "no-preload-048296"
	I1210 00:05:10.778339   83859 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1210 00:05:10.778394   83859 addons.go:243] addon metrics-server should already be in state true
	I1210 00:05:10.778446   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.778684   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778716   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778746   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.778721   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.778860   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778897   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.779748   83859 out.go:177] * Verifying Kubernetes components...
	I1210 00:05:10.781467   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:05:10.795108   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34465
	I1210 00:05:10.795227   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I1210 00:05:10.795615   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.795709   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.796135   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.796160   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.796221   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.796240   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.796522   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.796539   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.797128   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.797162   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.797200   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.797167   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.797697   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I1210 00:05:10.798091   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.798587   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.798613   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.798982   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.799324   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.803238   83859 addons.go:234] Setting addon default-storageclass=true in "no-preload-048296"
	W1210 00:05:10.803262   83859 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:05:10.803291   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.803683   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.803722   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.814000   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I1210 00:05:10.814046   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I1210 00:05:10.814470   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.814686   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.814992   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.815014   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.815427   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.815448   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.815515   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.815748   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.815974   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.816171   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.817989   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.818439   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.820342   83859 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:05:10.820360   83859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:05:10.821535   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:05:10.821556   83859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:05:10.821577   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.821652   83859 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:05:10.821673   83859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:05:10.821690   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.825763   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826205   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.826228   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42881
	I1210 00:05:10.826252   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826275   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.826294   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826327   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.826228   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826504   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.826563   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.826633   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.826711   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.826860   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.826864   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.827029   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:10.827152   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.827173   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.827241   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:10.827691   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.829421   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.829471   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.867902   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I1210 00:05:10.868566   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.869138   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.869169   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.869575   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.869782   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.871531   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.871792   83859 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:05:10.871810   83859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:05:10.871832   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.874309   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.874822   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.874845   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.874999   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.875202   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.875354   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.875501   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:11.009426   83859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:05:11.030237   83859 node_ready.go:35] waiting up to 6m0s for node "no-preload-048296" to be "Ready" ...
	I1210 00:05:11.054344   83859 node_ready.go:49] node "no-preload-048296" has status "Ready":"True"
	I1210 00:05:11.054366   83859 node_ready.go:38] duration metric: took 24.096361ms for node "no-preload-048296" to be "Ready" ...
	I1210 00:05:11.054376   83859 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:05:11.071740   83859 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:11.123723   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:05:11.137744   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:05:11.137772   83859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:05:11.159680   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:05:11.179245   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:05:11.179267   83859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:05:11.240553   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:05:11.240580   83859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:05:11.311813   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:05:12.041427   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041463   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041509   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041532   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041886   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.041949   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.041954   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.041963   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041967   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.041972   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041932   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.041986   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.042087   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.042229   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.042245   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.042297   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.042319   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.092616   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.092639   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.092910   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.092923   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.535943   83859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.224081671s)
	I1210 00:05:12.536008   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.536020   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.536456   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.536459   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.536482   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.536491   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.536498   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.536728   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.536735   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.536750   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.536761   83859 addons.go:475] Verifying addon metrics-server=true in "no-preload-048296"
	I1210 00:05:12.538638   83859 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 00:05:12.539910   83859 addons.go:510] duration metric: took 1.761785575s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 00:05:13.078954   83859 pod_ready.go:103] pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:05:13.579737   83859 pod_ready.go:93] pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:13.579760   83859 pod_ready.go:82] duration metric: took 2.507994954s for pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:13.579770   83859 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:14.586682   83859 pod_ready.go:93] pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:14.586704   83859 pod_ready.go:82] duration metric: took 1.006927767s for pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:14.586714   83859 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.092318   83859 pod_ready.go:93] pod "etcd-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.092345   83859 pod_ready.go:82] duration metric: took 505.624811ms for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.092356   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.096802   83859 pod_ready.go:93] pod "kube-apiserver-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.096828   83859 pod_ready.go:82] duration metric: took 4.463914ms for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.096839   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.100904   83859 pod_ready.go:93] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.100923   83859 pod_ready.go:82] duration metric: took 4.07832ms for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.100932   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qklxb" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.176527   83859 pod_ready.go:93] pod "kube-proxy-qklxb" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.176552   83859 pod_ready.go:82] duration metric: took 75.613294ms for pod "kube-proxy-qklxb" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.176562   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:17.182952   83859 pod_ready.go:103] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:05:18.181993   83859 pod_ready.go:93] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:18.182016   83859 pod_ready.go:82] duration metric: took 3.005447779s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:18.182024   83859 pod_ready.go:39] duration metric: took 7.127639413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:05:18.182038   83859 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:05:18.182084   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:05:18.196127   83859 api_server.go:72] duration metric: took 7.418043985s to wait for apiserver process to appear ...
	I1210 00:05:18.196155   83859 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:05:18.196176   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:05:18.200537   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 200:
	ok
	I1210 00:05:18.201476   83859 api_server.go:141] control plane version: v1.31.2
	I1210 00:05:18.201501   83859 api_server.go:131] duration metric: took 5.340199ms to wait for apiserver health ...
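The same health probe can be reproduced by hand against the endpoint reported in the lines above; a minimal sketch, assuming anonymous access to /healthz is still permitted (the Kubernetes default) and using the address and port from this log:
  # query the kube-apiserver health endpoint checked at 00:05:18
  curl -k https://192.168.61.182:8443/healthz
  # expected output on a healthy control plane:
  # ok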
	I1210 00:05:18.201508   83859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:05:18.208072   83859 system_pods.go:59] 9 kube-system pods found
	I1210 00:05:18.208105   83859 system_pods.go:61] "coredns-7c65d6cfc9-56djc" [2e25bad9-b88a-4ac8-a180-968bf6b057a2] Running
	I1210 00:05:18.208114   83859 system_pods.go:61] "coredns-7c65d6cfc9-8rxx7" [3fdfd41b-8ef7-41af-a703-ebecfe9ad319] Running
	I1210 00:05:18.208119   83859 system_pods.go:61] "etcd-no-preload-048296" [4447a7d6-df4b-4760-9352-4df0310017ee] Running
	I1210 00:05:18.208125   83859 system_pods.go:61] "kube-apiserver-no-preload-048296" [55a06182-1520-4497-8358-639438eeb297] Running
	I1210 00:05:18.208130   83859 system_pods.go:61] "kube-controller-manager-no-preload-048296" [f30bdb21-7742-46e7-90fe-403679f5e02a] Running
	I1210 00:05:18.208136   83859 system_pods.go:61] "kube-proxy-qklxb" [8bb029a1-abf9-4825-b9ec-0520a78cb3d8] Running
	I1210 00:05:18.208141   83859 system_pods.go:61] "kube-scheduler-no-preload-048296" [019d6d37-c586-4f12-8daf-b60752223cf1] Running
	I1210 00:05:18.208152   83859 system_pods.go:61] "metrics-server-6867b74b74-n2f8c" [8e9f56c9-fd67-4715-9148-1255be17f1fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:05:18.208163   83859 system_pods.go:61] "storage-provisioner" [55fe311f-4610-4805-9fb7-3f1cac7c96e6] Running
	I1210 00:05:18.208176   83859 system_pods.go:74] duration metric: took 6.661729ms to wait for pod list to return data ...
	I1210 00:05:18.208187   83859 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:05:18.375431   83859 default_sa.go:45] found service account: "default"
	I1210 00:05:18.375458   83859 default_sa.go:55] duration metric: took 167.260728ms for default service account to be created ...
	I1210 00:05:18.375467   83859 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:05:18.578148   83859 system_pods.go:86] 9 kube-system pods found
	I1210 00:05:18.578176   83859 system_pods.go:89] "coredns-7c65d6cfc9-56djc" [2e25bad9-b88a-4ac8-a180-968bf6b057a2] Running
	I1210 00:05:18.578183   83859 system_pods.go:89] "coredns-7c65d6cfc9-8rxx7" [3fdfd41b-8ef7-41af-a703-ebecfe9ad319] Running
	I1210 00:05:18.578187   83859 system_pods.go:89] "etcd-no-preload-048296" [4447a7d6-df4b-4760-9352-4df0310017ee] Running
	I1210 00:05:18.578191   83859 system_pods.go:89] "kube-apiserver-no-preload-048296" [55a06182-1520-4497-8358-639438eeb297] Running
	I1210 00:05:18.578195   83859 system_pods.go:89] "kube-controller-manager-no-preload-048296" [f30bdb21-7742-46e7-90fe-403679f5e02a] Running
	I1210 00:05:18.578198   83859 system_pods.go:89] "kube-proxy-qklxb" [8bb029a1-abf9-4825-b9ec-0520a78cb3d8] Running
	I1210 00:05:18.578203   83859 system_pods.go:89] "kube-scheduler-no-preload-048296" [019d6d37-c586-4f12-8daf-b60752223cf1] Running
	I1210 00:05:18.578209   83859 system_pods.go:89] "metrics-server-6867b74b74-n2f8c" [8e9f56c9-fd67-4715-9148-1255be17f1fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:05:18.578213   83859 system_pods.go:89] "storage-provisioner" [55fe311f-4610-4805-9fb7-3f1cac7c96e6] Running
	I1210 00:05:18.578223   83859 system_pods.go:126] duration metric: took 202.750724ms to wait for k8s-apps to be running ...
	I1210 00:05:18.578233   83859 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:05:18.578277   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:05:18.592635   83859 system_svc.go:56] duration metric: took 14.391582ms WaitForService to wait for kubelet
	I1210 00:05:18.592669   83859 kubeadm.go:582] duration metric: took 7.814589832s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:05:18.592690   83859 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:05:18.776611   83859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:05:18.776632   83859 node_conditions.go:123] node cpu capacity is 2
	I1210 00:05:18.776642   83859 node_conditions.go:105] duration metric: took 183.94737ms to run NodePressure ...
	I1210 00:05:18.776654   83859 start.go:241] waiting for startup goroutines ...
	I1210 00:05:18.776660   83859 start.go:246] waiting for cluster config update ...
	I1210 00:05:18.776672   83859 start.go:255] writing updated cluster config ...
	I1210 00:05:18.776944   83859 ssh_runner.go:195] Run: rm -f paused
	I1210 00:05:18.826550   83859 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:05:18.828405   83859 out.go:177] * Done! kubectl is now configured to use "no-preload-048296" cluster and "default" namespace by default
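With the profile reporting Done, the enabled addons can be spot-checked from the host. A small sketch, assuming the kubectl context name shown in the log and the conventional metrics-server APIService name (which is not printed in this log):
  # list kube-system pods for the no-preload profile
  kubectl --context no-preload-048296 -n kube-system get pods
  # check whether the metrics API has registered and become Available
  kubectl --context no-preload-048296 get apiservice v1beta1.metrics.k8s.io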
	I1210 00:05:43.088214   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:05:43.088352   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:05:43.089912   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:05:43.089988   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:05:43.090054   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:05:43.090140   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:05:43.090225   84547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:05:43.090302   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:05:43.092050   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:05:43.092141   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:05:43.092210   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:05:43.092305   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:05:43.092392   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:05:43.092493   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:05:43.092569   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:05:43.092680   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:05:43.092761   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:05:43.092865   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:05:43.092975   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:05:43.093045   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:05:43.093143   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:05:43.093188   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:05:43.093236   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:05:43.093317   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:05:43.093402   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:05:43.093561   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:05:43.093728   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:05:43.093785   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:05:43.093855   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:05:43.095285   84547 out.go:235]   - Booting up control plane ...
	I1210 00:05:43.095396   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:05:43.095469   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:05:43.095525   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:05:43.095630   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:05:43.095804   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:05:43.095873   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:05:43.095960   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096155   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096237   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096414   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096486   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096679   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096741   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096904   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096969   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.097122   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.097129   84547 kubeadm.go:310] 
	I1210 00:05:43.097167   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:05:43.097202   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:05:43.097211   84547 kubeadm.go:310] 
	I1210 00:05:43.097251   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:05:43.097280   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:05:43.097373   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:05:43.097379   84547 kubeadm.go:310] 
	I1210 00:05:43.097465   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:05:43.097495   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:05:43.097522   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:05:43.097531   84547 kubeadm.go:310] 
	I1210 00:05:43.097619   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:05:43.097688   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:05:43.097694   84547 kubeadm.go:310] 
	I1210 00:05:43.097794   84547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:05:43.097875   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:05:43.097941   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:05:43.098038   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:05:43.098124   84547 kubeadm.go:310] 
	W1210 00:05:43.098179   84547 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
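The failure text above already names the next diagnostic steps; collected here as one sketch, assuming shell access to the affected minikube VM and the CRI-O socket path from the log:
  # check whether the kubelet is running and why it may have exited
  sudo systemctl status kubelet
  sudo journalctl -xeu kubelet
  # list any control-plane containers started by CRI-O
  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  # then inspect the failing container's logs
  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID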
	
	I1210 00:05:43.098224   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:05:48.532537   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.434287494s)
	I1210 00:05:48.532615   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:05:48.546394   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:05:48.555650   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:05:48.555669   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:05:48.555721   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:05:48.565301   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:05:48.565368   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:05:48.575330   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:05:48.584238   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:05:48.584324   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:05:48.593395   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.602150   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:05:48.602209   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.611177   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:05:48.619837   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:05:48.619907   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
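The stale-config cleanup above boils down to the same grep-then-remove pattern for each kubeconfig; a condensed sketch of that loop, using only the file list and endpoint string shown in the log:
  # remove kubeconfigs that do not point at the expected control-plane endpoint
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
      || sudo rm -f /etc/kubernetes/$f
  done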
	I1210 00:05:48.629126   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:05:48.827680   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:07:44.857816   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:07:44.857930   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:07:44.859490   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:07:44.859542   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:07:44.859627   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:07:44.859758   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:07:44.859894   84547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:07:44.859953   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:07:44.861538   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:07:44.861657   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:07:44.861736   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:07:44.861809   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:07:44.861861   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:07:44.861920   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:07:44.861969   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:07:44.862024   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:07:44.862077   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:07:44.862143   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:07:44.862212   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:07:44.862246   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:07:44.862293   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:07:44.862343   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:07:44.862408   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:07:44.862505   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:07:44.862615   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:07:44.862765   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:07:44.862897   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:07:44.862955   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:07:44.863059   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:07:44.865485   84547 out.go:235]   - Booting up control plane ...
	I1210 00:07:44.865595   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:07:44.865720   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:07:44.865815   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:07:44.865929   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:07:44.866136   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:07:44.866209   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:07:44.866332   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866517   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866586   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866752   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866818   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866976   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867042   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867210   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867323   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867512   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867522   84547 kubeadm.go:310] 
	I1210 00:07:44.867611   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:07:44.867671   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:07:44.867691   84547 kubeadm.go:310] 
	I1210 00:07:44.867729   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:07:44.867766   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:07:44.867880   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:07:44.867895   84547 kubeadm.go:310] 
	I1210 00:07:44.868055   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:07:44.868105   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:07:44.868138   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:07:44.868146   84547 kubeadm.go:310] 
	I1210 00:07:44.868250   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:07:44.868354   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:07:44.868362   84547 kubeadm.go:310] 
	I1210 00:07:44.868464   84547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:07:44.868540   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:07:44.868606   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:07:44.868689   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:07:44.868741   84547 kubeadm.go:310] 
	I1210 00:07:44.868762   84547 kubeadm.go:394] duration metric: took 8m1.943355039s to StartCluster
	I1210 00:07:44.868799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:07:44.868852   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:07:44.906641   84547 cri.go:89] found id: ""
	I1210 00:07:44.906667   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.906675   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:07:44.906681   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:07:44.906734   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:07:44.942832   84547 cri.go:89] found id: ""
	I1210 00:07:44.942863   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.942872   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:07:44.942881   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:07:44.942945   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:07:44.978010   84547 cri.go:89] found id: ""
	I1210 00:07:44.978034   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.978042   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:07:44.978047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:07:44.978108   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:07:45.017066   84547 cri.go:89] found id: ""
	I1210 00:07:45.017089   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.017097   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:07:45.017110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:07:45.017172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:07:45.049757   84547 cri.go:89] found id: ""
	I1210 00:07:45.049778   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.049786   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:07:45.049791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:07:45.049842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:07:45.082754   84547 cri.go:89] found id: ""
	I1210 00:07:45.082789   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.082800   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:07:45.082807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:07:45.082933   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:07:45.117188   84547 cri.go:89] found id: ""
	I1210 00:07:45.117219   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.117227   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:07:45.117233   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:07:45.117302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:07:45.153747   84547 cri.go:89] found id: ""
	I1210 00:07:45.153776   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.153785   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:07:45.153795   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:07:45.153810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:07:45.207625   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:07:45.207662   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:07:45.220929   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:07:45.220957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:07:45.291850   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:07:45.291883   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:07:45.291899   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:07:45.397592   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:07:45.397629   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
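The log-gathering pass just above can be repeated manually when triaging this kind of kubeadm wait-control-plane timeout; a brief sketch using the same commands minikube runs, assuming a shell on the node:
  # kubelet and CRI-O service logs (most recent 400 lines each)
  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  # recent kernel warnings and errors
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  # container status as seen by the runtime
  sudo crictl ps -a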
	W1210 00:07:45.446755   84547 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 00:07:45.446816   84547 out.go:270] * 
	W1210 00:07:45.446898   84547 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.446921   84547 out.go:270] * 
	W1210 00:07:45.448145   84547 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 00:07:45.452118   84547 out.go:201] 
	W1210 00:07:45.453335   84547 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.453398   84547 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 00:07:45.453438   84547 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 00:07:45.455704   84547 out.go:201] 
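	The failure captured above is minikube's K8S_KUBELET_NOT_RUNNING exit: the kubelet on this node never answered its health check on localhost:10248, so kubeadm timed out in the wait-control-plane phase. A minimal follow-up sketch, using only the commands the log itself suggests; the systemd cgroup-driver override is minikube's own suggestion and is not confirmed here as the actual root cause:
	
	# Inspect why the kubelet never became healthy (suggested by kubeadm above)
	systemctl status kubelet
	journalctl -xeu kubelet
	
	# Retry the start with the cgroup-driver override suggested by minikube
	minikube start --extra-config=kubelet.cgroup-driver=systemd
	
	# Collect full logs for a bug report, as the error box above recommends
	minikube logs --file=logs.txt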
	
	
	==> CRI-O <==
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.632452322Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=3495a256-c202-4aa2-9cfd-a1ce1f08b0d1 name=/runtime.v1.RuntimeService/Status
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.632841560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a223e75c-df11-43bd-9771-74bece5984ac name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.632902842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a223e75c-df11-43bd-9771-74bece5984ac name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.633086793Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733788778622267808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d601f9e42631cd298cf241017ee607bd6dbf69fe78c62e3ebaa7145777b4e692,PodSandboxId:9058ab0a3a0d5c88566c9b0a1fcfd2211b1ae51f298823e2237bd15927c1cb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733788757643541081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26d6fcf4-98a5-4f18-a823-b7b7ec824711,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71,PodSandboxId:f37397cbb45679df27a16597bcdf2406bdf0ead341764319962fb99089a4e879,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788755456314249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvtlr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef5707d-5f06-46d0-809b-da79f79c49e5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733788747751358728,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994,PodSandboxId:9cbf654ac355f91a513e7db1e6577837c9b14c6d1c04b2ca3ba7a0c6ce45284a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733788747759104647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn6fg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6db02558-bfa6-4c5f-a120-aed13575b
273,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538,PodSandboxId:c2b66ef16a899ea5d355e9b3b1b65828d2e66f3b273db5a7309a51b051fee831,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733788744181097853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515071d8bf56c0
1b07af6d39852c2e11,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de,PodSandboxId:506e0e7ee92a7bd8873715c778d39d892fe1cb40b0ea85e427371641227754d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733788744177234994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcff97bb4f6a0cb96f1e10113a
260f34,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9,PodSandboxId:49f894377a222370019ddf4a558b814e79f0dec55facbff6954ead3e82919eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733788744156377984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823ae58953ea2563864dd528b6e2b2ba,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7,PodSandboxId:1ea97f8d6c2da0292213660bb42711c450ee9b0ab7057cb07c94bc586c84321e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733788744153273810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6860c965e2f880a34f73a811ad70073e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a223e75c-df11-43bd-9771-74bece5984ac name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.669886334Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=327513ee-f14d-4f84-92fd-917e98bf131e name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.669966445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=327513ee-f14d-4f84-92fd-917e98bf131e name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.670865570Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d329c849-4c04-4b3a-b4e8-6f427551f86b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.671265929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789557671245856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d329c849-4c04-4b3a-b4e8-6f427551f86b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.671895148Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d3c7bbb-a4f6-40e0-b181-62b090513894 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.671966775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d3c7bbb-a4f6-40e0-b181-62b090513894 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.672164566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733788778622267808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d601f9e42631cd298cf241017ee607bd6dbf69fe78c62e3ebaa7145777b4e692,PodSandboxId:9058ab0a3a0d5c88566c9b0a1fcfd2211b1ae51f298823e2237bd15927c1cb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733788757643541081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26d6fcf4-98a5-4f18-a823-b7b7ec824711,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71,PodSandboxId:f37397cbb45679df27a16597bcdf2406bdf0ead341764319962fb99089a4e879,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788755456314249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvtlr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef5707d-5f06-46d0-809b-da79f79c49e5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733788747751358728,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994,PodSandboxId:9cbf654ac355f91a513e7db1e6577837c9b14c6d1c04b2ca3ba7a0c6ce45284a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733788747759104647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn6fg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6db02558-bfa6-4c5f-a120-aed13575b
273,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538,PodSandboxId:c2b66ef16a899ea5d355e9b3b1b65828d2e66f3b273db5a7309a51b051fee831,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733788744181097853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515071d8bf56c0
1b07af6d39852c2e11,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de,PodSandboxId:506e0e7ee92a7bd8873715c778d39d892fe1cb40b0ea85e427371641227754d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733788744177234994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcff97bb4f6a0cb96f1e10113a
260f34,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9,PodSandboxId:49f894377a222370019ddf4a558b814e79f0dec55facbff6954ead3e82919eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733788744156377984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823ae58953ea2563864dd528b6e2b2ba,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7,PodSandboxId:1ea97f8d6c2da0292213660bb42711c450ee9b0ab7057cb07c94bc586c84321e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733788744153273810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6860c965e2f880a34f73a811ad70073e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d3c7bbb-a4f6-40e0-b181-62b090513894 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.706691652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed60594c-ad75-4964-8f75-0d85154f20f6 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.706779686Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed60594c-ad75-4964-8f75-0d85154f20f6 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.708223781Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6697b0f-fc38-4965-b58b-afe1f8b88e64 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.708661503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789557708638552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6697b0f-fc38-4965-b58b-afe1f8b88e64 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.709191677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e12b3da7-4d6c-4d6d-8bf5-8bf673de2093 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.709248738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e12b3da7-4d6c-4d6d-8bf5-8bf673de2093 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.709699256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733788778622267808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d601f9e42631cd298cf241017ee607bd6dbf69fe78c62e3ebaa7145777b4e692,PodSandboxId:9058ab0a3a0d5c88566c9b0a1fcfd2211b1ae51f298823e2237bd15927c1cb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733788757643541081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26d6fcf4-98a5-4f18-a823-b7b7ec824711,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71,PodSandboxId:f37397cbb45679df27a16597bcdf2406bdf0ead341764319962fb99089a4e879,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788755456314249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvtlr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef5707d-5f06-46d0-809b-da79f79c49e5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733788747751358728,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994,PodSandboxId:9cbf654ac355f91a513e7db1e6577837c9b14c6d1c04b2ca3ba7a0c6ce45284a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733788747759104647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn6fg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6db02558-bfa6-4c5f-a120-aed13575b
273,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538,PodSandboxId:c2b66ef16a899ea5d355e9b3b1b65828d2e66f3b273db5a7309a51b051fee831,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733788744181097853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515071d8bf56c0
1b07af6d39852c2e11,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de,PodSandboxId:506e0e7ee92a7bd8873715c778d39d892fe1cb40b0ea85e427371641227754d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733788744177234994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcff97bb4f6a0cb96f1e10113a
260f34,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9,PodSandboxId:49f894377a222370019ddf4a558b814e79f0dec55facbff6954ead3e82919eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733788744156377984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823ae58953ea2563864dd528b6e2b2ba,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7,PodSandboxId:1ea97f8d6c2da0292213660bb42711c450ee9b0ab7057cb07c94bc586c84321e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733788744153273810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6860c965e2f880a34f73a811ad70073e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e12b3da7-4d6c-4d6d-8bf5-8bf673de2093 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.740011084Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9103b3b3-adfe-4255-8323-d3b67b1371b5 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.740083242Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9103b3b3-adfe-4255-8323-d3b67b1371b5 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.740924109Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=347ac174-74d9-43ee-b54d-0aaa9d72d034 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.741297169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789557741276612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=347ac174-74d9-43ee-b54d-0aaa9d72d034 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.741764243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d1c3bc2-9da4-4366-bb93-cfa6c607ce4e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.741814686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d1c3bc2-9da4-4366-bb93-cfa6c607ce4e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:37 embed-certs-825613 crio[689]: time="2024-12-10 00:12:37.742000012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733788778622267808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d601f9e42631cd298cf241017ee607bd6dbf69fe78c62e3ebaa7145777b4e692,PodSandboxId:9058ab0a3a0d5c88566c9b0a1fcfd2211b1ae51f298823e2237bd15927c1cb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733788757643541081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26d6fcf4-98a5-4f18-a823-b7b7ec824711,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71,PodSandboxId:f37397cbb45679df27a16597bcdf2406bdf0ead341764319962fb99089a4e879,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788755456314249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvtlr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef5707d-5f06-46d0-809b-da79f79c49e5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733788747751358728,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994,PodSandboxId:9cbf654ac355f91a513e7db1e6577837c9b14c6d1c04b2ca3ba7a0c6ce45284a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733788747759104647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn6fg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6db02558-bfa6-4c5f-a120-aed13575b
273,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538,PodSandboxId:c2b66ef16a899ea5d355e9b3b1b65828d2e66f3b273db5a7309a51b051fee831,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733788744181097853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515071d8bf56c0
1b07af6d39852c2e11,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de,PodSandboxId:506e0e7ee92a7bd8873715c778d39d892fe1cb40b0ea85e427371641227754d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733788744177234994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcff97bb4f6a0cb96f1e10113a
260f34,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9,PodSandboxId:49f894377a222370019ddf4a558b814e79f0dec55facbff6954ead3e82919eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733788744156377984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823ae58953ea2563864dd528b6e2b2ba,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7,PodSandboxId:1ea97f8d6c2da0292213660bb42711c450ee9b0ab7057cb07c94bc586c84321e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733788744153273810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6860c965e2f880a34f73a811ad70073e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d1c3bc2-9da4-4366-bb93-cfa6c607ce4e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b794fd5af2249       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   2e0732572e8db       storage-provisioner
	d601f9e42631c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   9058ab0a3a0d5       busybox
	db9231487d25e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   f37397cbb4567       coredns-7c65d6cfc9-qvtlr
	a17d14690e81c       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   9cbf654ac355f       kube-proxy-rn6fg
	e6a287aaa2bb1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   2e0732572e8db       storage-provisioner
	a8a1911851cba       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   c2b66ef16a899       kube-controller-manager-embed-certs-825613
	f251f2ec97259       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   506e0e7ee92a7       kube-scheduler-embed-certs-825613
	c641220f93efe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   49f894377a222       etcd-embed-certs-825613
	07b6833b28b16       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   1ea97f8d6c2da       kube-apiserver-embed-certs-825613
	
	
	==> coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42846 - 8639 "HINFO IN 2605694768704771407.3649347858089209996. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.038925999s
	
	
	==> describe nodes <==
	Name:               embed-certs-825613
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-825613
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=embed-certs-825613
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T23_50_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 23:50:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-825613
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:12:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:09:49 +0000   Mon, 09 Dec 2024 23:50:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:09:49 +0000   Mon, 09 Dec 2024 23:50:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:09:49 +0000   Mon, 09 Dec 2024 23:50:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:09:49 +0000   Mon, 09 Dec 2024 23:59:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.19
	  Hostname:    embed-certs-825613
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5553bfc98ff4251b26fadf70ee93ead
	  System UUID:                e5553bfc-98ff-4251-b26f-adf70ee93ead
	  Boot ID:                    3d98bdcb-9f0e-42c7-a111-540ec74aef73
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-qvtlr                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-825613                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-825613             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-825613    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-rn6fg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-825613             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-hg7c5               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-825613 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-825613 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-825613 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-825613 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-825613 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-825613 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node embed-certs-825613 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-825613 event: Registered Node embed-certs-825613 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-825613 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-825613 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-825613 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-825613 event: Registered Node embed-certs-825613 in Controller
	
	
	==> dmesg <==
	[Dec 9 23:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048892] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036533] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.833942] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.970754] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.547841] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.986658] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.064094] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063172] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.200745] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.100670] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.270843] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +4.039153] systemd-fstab-generator[772]: Ignoring "noauto" option for root device
	[Dec 9 23:59] systemd-fstab-generator[894]: Ignoring "noauto" option for root device
	[  +0.061094] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.504702] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.444205] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +3.278360] kauditd_printk_skb: 64 callbacks suppressed
	[ +25.240683] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] <==
	{"level":"warn","ts":"2024-12-09T23:59:22.523914Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.904485ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:59:22.523935Z","caller":"traceutil/trace.go:171","msg":"trace[13496959] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:623; }","duration":"132.928145ms","start":"2024-12-09T23:59:22.391000Z","end":"2024-12-09T23:59:22.523928Z","steps":["trace[13496959] 'agreement among raft nodes before linearized reading'  (duration: 132.891482ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:59:23.419604Z","caller":"traceutil/trace.go:171","msg":"trace[335628539] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"349.194986ms","start":"2024-12-09T23:59:23.070391Z","end":"2024-12-09T23:59:23.419586Z","steps":["trace[335628539] 'process raft request'  (duration: 349.022141ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:59:23.420061Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T23:59:23.070373Z","time spent":"349.298102ms","remote":"127.0.0.1:36292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4518,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-825613\" mod_revision:511 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-825613\" value_size:4450 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-825613\" > >"}
	{"level":"info","ts":"2024-12-09T23:59:23.420597Z","caller":"traceutil/trace.go:171","msg":"trace[303433401] linearizableReadLoop","detail":"{readStateIndex:669; appliedIndex:669; }","duration":"220.931521ms","start":"2024-12-09T23:59:23.199652Z","end":"2024-12-09T23:59:23.420584Z","steps":["trace[303433401] 'read index received'  (duration: 220.921146ms)","trace[303433401] 'applied index is now lower than readState.Index'  (duration: 9.429µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T23:59:23.420711Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.069399ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:59:23.420756Z","caller":"traceutil/trace.go:171","msg":"trace[1263770303] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:624; }","duration":"221.12025ms","start":"2024-12-09T23:59:23.199625Z","end":"2024-12-09T23:59:23.420745Z","steps":["trace[1263770303] 'agreement among raft nodes before linearized reading'  (duration: 221.05274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:59:23.420944Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.631501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-825613\" ","response":"range_response_count:1 size:4533"}
	{"level":"info","ts":"2024-12-09T23:59:23.421206Z","caller":"traceutil/trace.go:171","msg":"trace[1455931536] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-embed-certs-825613; range_end:; response_count:1; response_revision:624; }","duration":"108.890958ms","start":"2024-12-09T23:59:23.312303Z","end":"2024-12-09T23:59:23.421194Z","steps":["trace[1455931536] 'agreement among raft nodes before linearized reading'  (duration: 108.609634ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:59:23.952762Z","caller":"traceutil/trace.go:171","msg":"trace[1671691036] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"520.173296ms","start":"2024-12-09T23:59:23.432573Z","end":"2024-12-09T23:59:23.952746Z","steps":["trace[1671691036] 'process raft request'  (duration: 519.718959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:59:23.952892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T23:59:23.432555Z","time spent":"520.28452ms","remote":"127.0.0.1:36292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4326,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-825613\" mod_revision:624 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-825613\" value_size:4258 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-825613\" > >"}
	{"level":"info","ts":"2024-12-09T23:59:23.952409Z","caller":"traceutil/trace.go:171","msg":"trace[69263747] linearizableReadLoop","detail":"{readStateIndex:670; appliedIndex:669; }","duration":"428.076212ms","start":"2024-12-09T23:59:23.524321Z","end":"2024-12-09T23:59:23.952397Z","steps":["trace[69263747] 'read index received'  (duration: 427.902534ms)","trace[69263747] 'applied index is now lower than readState.Index'  (duration: 173.205µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T23:59:23.953523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.452198ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-825613\" ","response":"range_response_count:1 size:4341"}
	{"level":"info","ts":"2024-12-09T23:59:23.953565Z","caller":"traceutil/trace.go:171","msg":"trace[457871556] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-embed-certs-825613; range_end:; response_count:1; response_revision:625; }","duration":"141.498717ms","start":"2024-12-09T23:59:23.812057Z","end":"2024-12-09T23:59:23.953556Z","steps":["trace[457871556] 'agreement among raft nodes before linearized reading'  (duration: 141.431414ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:59:43.832508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.861654ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12394631499418903644 > lease_revoke:<id:2c0293addcbb23d4>","response":"size:28"}
	{"level":"info","ts":"2024-12-10T00:00:00.481927Z","caller":"traceutil/trace.go:171","msg":"trace[1606349903] linearizableReadLoop","detail":"{readStateIndex:711; appliedIndex:710; }","duration":"283.177375ms","start":"2024-12-10T00:00:00.198735Z","end":"2024-12-10T00:00:00.481913Z","steps":["trace[1606349903] 'read index received'  (duration: 283.019224ms)","trace[1606349903] 'applied index is now lower than readState.Index'  (duration: 157.548µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-10T00:00:00.482244Z","caller":"traceutil/trace.go:171","msg":"trace[644595571] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"308.033134ms","start":"2024-12-10T00:00:00.174199Z","end":"2024-12-10T00:00:00.482232Z","steps":["trace[644595571] 'process raft request'  (duration: 307.595037ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-10T00:00:00.482372Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-10T00:00:00.174182Z","time spent":"308.120337ms","remote":"127.0.0.1:36282","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:654 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-12-10T00:00:00.482571Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"283.828056ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-10T00:00:00.482633Z","caller":"traceutil/trace.go:171","msg":"trace[1900332290] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:658; }","duration":"283.893994ms","start":"2024-12-10T00:00:00.198730Z","end":"2024-12-10T00:00:00.482624Z","steps":["trace[1900332290] 'agreement among raft nodes before linearized reading'  (duration: 283.814839ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-10T00:00:08.593933Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.832249ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-hg7c5\" ","response":"range_response_count:1 size:4384"}
	{"level":"info","ts":"2024-12-10T00:00:08.594260Z","caller":"traceutil/trace.go:171","msg":"trace[401372728] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-hg7c5; range_end:; response_count:1; response_revision:666; }","duration":"128.168658ms","start":"2024-12-10T00:00:08.466077Z","end":"2024-12-10T00:00:08.594246Z","steps":["trace[401372728] 'range keys from in-memory index tree'  (duration: 127.676583ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-10T00:09:05.820537Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":868}
	{"level":"info","ts":"2024-12-10T00:09:05.830778Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":868,"took":"9.890796ms","hash":2296338318,"current-db-size-bytes":2629632,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2629632,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-12-10T00:09:05.830937Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2296338318,"revision":868,"compact-revision":-1}
	
	
	==> kernel <==
	 00:12:38 up 14 min,  0 users,  load average: 0.19, 0.17, 0.13
	Linux embed-certs-825613 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] <==
	W1210 00:09:08.025706       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:09:08.025758       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 00:09:08.026752       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:09:08.026789       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 00:10:08.026922       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:10:08.027119       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 00:10:08.026947       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:10:08.027218       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 00:10:08.028276       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:10:08.028302       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 00:12:08.028691       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:12:08.028864       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 00:12:08.028717       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:12:08.028966       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 00:12:08.030125       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:12:08.030151       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] <==
	E1210 00:07:10.544082       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:07:11.099248       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:07:40.550144       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:07:41.106580       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:08:10.556385       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:08:11.114208       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:08:40.561353       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:08:41.123270       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:09:10.568044       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:09:11.130167       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:09:40.573281       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:09:41.137393       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:09:49.823691       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-825613"
	E1210 00:10:10.579147       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:10:11.144259       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:10:23.411629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="87.626µs"
	I1210 00:10:35.411976       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="59.861µs"
	E1210 00:10:40.585090       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:10:41.151246       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:11:10.591218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:11:11.159383       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:11:40.597443       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:11:41.166214       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:12:10.603950       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:12:11.172746       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 23:59:07.944840       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 23:59:07.963596       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.19"]
	E1209 23:59:07.963750       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:59:08.006910       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 23:59:08.006996       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 23:59:08.007049       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:59:08.009177       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:59:08.009411       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:59:08.009589       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:59:08.010770       1 config.go:199] "Starting service config controller"
	I1209 23:59:08.010820       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:59:08.010866       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:59:08.010882       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:59:08.011338       1 config.go:328] "Starting node config controller"
	I1209 23:59:08.011374       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:59:08.111145       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 23:59:08.111283       1 shared_informer.go:320] Caches are synced for service config
	I1209 23:59:08.111826       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] <==
	I1209 23:59:04.894213       1 serving.go:386] Generated self-signed cert in-memory
	W1209 23:59:06.947120       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 23:59:06.947157       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 23:59:06.947213       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 23:59:06.947222       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 23:59:07.053355       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1209 23:59:07.053429       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:59:07.082056       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1209 23:59:07.078190       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 23:59:07.082818       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 23:59:07.083198       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 23:59:07.183067       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 00:11:26 embed-certs-825613 kubelet[901]: E1210 00:11:26.398360     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hg7c5" podUID="2a657b1b-4435-42b5-aef2-deebf7865c83"
	Dec 10 00:11:32 embed-certs-825613 kubelet[901]: E1210 00:11:32.555381     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789492555081668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:32 embed-certs-825613 kubelet[901]: E1210 00:11:32.555434     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789492555081668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:37 embed-certs-825613 kubelet[901]: E1210 00:11:37.399371     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hg7c5" podUID="2a657b1b-4435-42b5-aef2-deebf7865c83"
	Dec 10 00:11:42 embed-certs-825613 kubelet[901]: E1210 00:11:42.557246     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789502556969764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:42 embed-certs-825613 kubelet[901]: E1210 00:11:42.557591     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789502556969764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:48 embed-certs-825613 kubelet[901]: E1210 00:11:48.398767     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hg7c5" podUID="2a657b1b-4435-42b5-aef2-deebf7865c83"
	Dec 10 00:11:52 embed-certs-825613 kubelet[901]: E1210 00:11:52.559425     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789512559066721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:52 embed-certs-825613 kubelet[901]: E1210 00:11:52.559501     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789512559066721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:59 embed-certs-825613 kubelet[901]: E1210 00:11:59.398611     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hg7c5" podUID="2a657b1b-4435-42b5-aef2-deebf7865c83"
	Dec 10 00:12:02 embed-certs-825613 kubelet[901]: E1210 00:12:02.416155     901 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 00:12:02 embed-certs-825613 kubelet[901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 00:12:02 embed-certs-825613 kubelet[901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 00:12:02 embed-certs-825613 kubelet[901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:12:02 embed-certs-825613 kubelet[901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:12:02 embed-certs-825613 kubelet[901]: E1210 00:12:02.561311     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789522561095682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:02 embed-certs-825613 kubelet[901]: E1210 00:12:02.561334     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789522561095682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:12 embed-certs-825613 kubelet[901]: E1210 00:12:12.399251     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hg7c5" podUID="2a657b1b-4435-42b5-aef2-deebf7865c83"
	Dec 10 00:12:12 embed-certs-825613 kubelet[901]: E1210 00:12:12.563001     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789532562657470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:12 embed-certs-825613 kubelet[901]: E1210 00:12:12.563040     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789532562657470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:22 embed-certs-825613 kubelet[901]: E1210 00:12:22.564597     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789542563904893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:22 embed-certs-825613 kubelet[901]: E1210 00:12:22.564633     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789542563904893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:27 embed-certs-825613 kubelet[901]: E1210 00:12:27.398463     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hg7c5" podUID="2a657b1b-4435-42b5-aef2-deebf7865c83"
	Dec 10 00:12:32 embed-certs-825613 kubelet[901]: E1210 00:12:32.565834     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789552565593371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:32 embed-certs-825613 kubelet[901]: E1210 00:12:32.565895     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789552565593371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] <==
	I1209 23:59:38.733523       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:59:38.744625       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:59:38.744742       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:59:56.145961       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:59:56.146265       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-825613_c134b788-3d5c-4519-a5bc-6e8be741a5d7!
	I1209 23:59:56.148163       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bd7f2198-abd8-43b4-9ad3-a2585364fc90", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-825613_c134b788-3d5c-4519-a5bc-6e8be741a5d7 became leader
	I1209 23:59:56.246592       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-825613_c134b788-3d5c-4519-a5bc-6e8be741a5d7!
	
	
	==> storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] <==
	I1209 23:59:07.855371       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 23:59:37.858715       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-825613 -n embed-certs-825613
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-825613 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-hg7c5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-825613 describe pod metrics-server-6867b74b74-hg7c5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-825613 describe pod metrics-server-6867b74b74-hg7c5: exit status 1 (64.431121ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-hg7c5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-825613 describe pod metrics-server-6867b74b74-hg7c5: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.11s)
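
Both this failure and the default-k8s-diff-port failure that follows reduce to the same symptom: after the stop/start cycle, no pod matching "k8s-app=kubernetes-dashboard" appears in the "kubernetes-dashboard" namespace within the 9m0s window, so the helper times out with "context deadline exceeded". For local triage, the sketch below reproduces roughly the same wait loop with client-go against the failing profile's context; it is a minimal sketch, not the test's own code, and details such as the kubeconfig path and the 10-second poll interval are illustrative assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig at the default ~/.kube/config location with the
		// failing profile's context selected; the CI harness passes --context instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll for dashboard pods, mirroring the 9m0s wait the test performs.
		deadline := time.Now().Add(9 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err == nil && len(pods.Items) > 0 {
				fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
				return
			}
			time.Sleep(10 * time.Second) // illustrative poll interval
		}
		fmt.Println("timed out: no k8s-app=kubernetes-dashboard pods, matching the failure above")
	}
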

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1210 00:05:11.198584   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-871210 -n default-k8s-diff-port-871210
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-10 00:13:36.275114233 +0000 UTC m=+6100.706729338
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871210 -n default-k8s-diff-port-871210
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-871210 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-871210 logs -n 25: (2.004205505s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-030585 sudo cat                              | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo find                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo crio                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-030585                                       | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-866797 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | disable-driver-mounts-866797                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:52 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-048296             | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-825613            | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-871210  | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC | 09 Dec 24 23:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC |                     |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-720064        | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-048296                  | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-825613                 | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-871210       | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC | 10 Dec 24 00:04 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-720064             | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:55:25
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:55:25.509412   84547 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:55:25.509516   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509524   84547 out.go:358] Setting ErrFile to fd 2...
	I1209 23:55:25.509541   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509720   84547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:55:25.510225   84547 out.go:352] Setting JSON to false
	I1209 23:55:25.511117   84547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9476,"bootTime":1733779049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:55:25.511206   84547 start.go:139] virtualization: kvm guest
	I1209 23:55:25.513214   84547 out.go:177] * [old-k8s-version-720064] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:55:25.514823   84547 notify.go:220] Checking for updates...
	I1209 23:55:25.514845   84547 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:55:25.516067   84547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:55:25.517350   84547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:55:25.518678   84547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:55:25.520082   84547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:55:25.521432   84547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:55:25.523092   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:55:25.523499   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.523548   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.538209   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I1209 23:55:25.538728   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.539263   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.539282   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.539570   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.539735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.541550   84547 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 23:55:25.542864   84547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:55:25.543159   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.543196   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.558672   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1209 23:55:25.559075   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.559524   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.559547   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.559881   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.560082   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.595916   84547 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 23:55:25.597365   84547 start.go:297] selected driver: kvm2
	I1209 23:55:25.597380   84547 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.597473   84547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:55:25.598182   84547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.598251   84547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:55:25.612993   84547 install.go:137] /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:55:25.613401   84547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:55:25.613430   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:55:25.613473   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:55:25.613509   84547 start.go:340] cluster config:
	{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.613608   84547 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.615371   84547 out.go:177] * Starting "old-k8s-version-720064" primary control-plane node in "old-k8s-version-720064" cluster
	I1209 23:55:25.371778   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:25.616599   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:55:25.616629   84547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 23:55:25.616636   84547 cache.go:56] Caching tarball of preloaded images
	I1209 23:55:25.616716   84547 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:55:25.616727   84547 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1209 23:55:25.616809   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:55:25.616984   84547 start.go:360] acquireMachinesLock for old-k8s-version-720064: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:55:28.439810   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:34.519811   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:37.591794   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:43.671858   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:46.743891   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:52.823826   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:55.895854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:01.975845   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:05.047862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:11.127840   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:14.199834   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:20.279853   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:23.351850   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:29.431829   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:32.503859   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:38.583822   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:41.655831   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:47.735829   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:50.807862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:56.887827   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:59.959820   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:06.039852   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:09.111843   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:15.191823   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:18.263824   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:24.343852   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:27.415807   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:33.495843   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:36.567854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:42.647854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:45.719902   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:51.799842   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:54.871862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:00.951877   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:04.023859   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:10.103869   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:13.175788   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:19.255805   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:22.327784   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:28.407842   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:31.479864   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:34.483630   83900 start.go:364] duration metric: took 4m35.142298634s to acquireMachinesLock for "embed-certs-825613"
	I1209 23:58:34.483690   83900 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:58:34.483698   83900 fix.go:54] fixHost starting: 
	I1209 23:58:34.484038   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:58:34.484080   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:58:34.499267   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I1209 23:58:34.499717   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:58:34.500208   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:58:34.500236   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:58:34.500552   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:58:34.500747   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:34.500886   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:58:34.502318   83900 fix.go:112] recreateIfNeeded on embed-certs-825613: state=Stopped err=<nil>
	I1209 23:58:34.502340   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	W1209 23:58:34.502480   83900 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:58:34.504290   83900 out.go:177] * Restarting existing kvm2 VM for "embed-certs-825613" ...
	I1209 23:58:34.481505   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:58:34.481545   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:58:34.481889   83859 buildroot.go:166] provisioning hostname "no-preload-048296"
	I1209 23:58:34.481917   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:58:34.482120   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:58:34.483492   83859 machine.go:96] duration metric: took 4m37.418278223s to provisionDockerMachine
	I1209 23:58:34.483534   83859 fix.go:56] duration metric: took 4m37.439725581s for fixHost
	I1209 23:58:34.483542   83859 start.go:83] releasing machines lock for "no-preload-048296", held for 4m37.439753726s
	W1209 23:58:34.483579   83859 start.go:714] error starting host: provision: host is not running
	W1209 23:58:34.483682   83859 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1209 23:58:34.483694   83859 start.go:729] Will try again in 5 seconds ...
	I1209 23:58:34.505520   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Start
	I1209 23:58:34.505676   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring networks are active...
	I1209 23:58:34.506381   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring network default is active
	I1209 23:58:34.506676   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring network mk-embed-certs-825613 is active
	I1209 23:58:34.507046   83900 main.go:141] libmachine: (embed-certs-825613) Getting domain xml...
	I1209 23:58:34.507727   83900 main.go:141] libmachine: (embed-certs-825613) Creating domain...
	I1209 23:58:35.719784   83900 main.go:141] libmachine: (embed-certs-825613) Waiting to get IP...
	I1209 23:58:35.720797   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:35.721325   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:35.721377   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:35.721289   85186 retry.go:31] will retry after 251.225165ms: waiting for machine to come up
	I1209 23:58:35.973801   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:35.974247   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:35.974276   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:35.974205   85186 retry.go:31] will retry after 298.029679ms: waiting for machine to come up
	I1209 23:58:36.273658   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:36.274090   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:36.274119   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:36.274049   85186 retry.go:31] will retry after 467.217404ms: waiting for machine to come up
	I1209 23:58:36.742591   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:36.743027   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:36.743052   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:36.742982   85186 retry.go:31] will retry after 409.741731ms: waiting for machine to come up
	I1209 23:58:37.154543   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:37.154958   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:37.154982   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:37.154916   85186 retry.go:31] will retry after 723.02759ms: waiting for machine to come up
	I1209 23:58:37.878986   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:37.879474   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:37.879507   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:37.879423   85186 retry.go:31] will retry after 767.399861ms: waiting for machine to come up
	I1209 23:58:38.648416   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:38.648903   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:38.648927   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:38.648853   85186 retry.go:31] will retry after 1.06423499s: waiting for machine to come up
	I1209 23:58:39.485363   83859 start.go:360] acquireMachinesLock for no-preload-048296: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:58:39.714711   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:39.715114   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:39.715147   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:39.715058   85186 retry.go:31] will retry after 1.402884688s: waiting for machine to come up
	I1209 23:58:41.119828   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:41.120315   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:41.120359   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:41.120258   85186 retry.go:31] will retry after 1.339874475s: waiting for machine to come up
	I1209 23:58:42.461858   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:42.462314   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:42.462343   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:42.462261   85186 retry.go:31] will retry after 1.455809273s: waiting for machine to come up
	I1209 23:58:43.920097   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:43.920418   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:43.920448   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:43.920371   85186 retry.go:31] will retry after 2.872969061s: waiting for machine to come up
	I1209 23:58:46.796163   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:46.796477   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:46.796499   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:46.796439   85186 retry.go:31] will retry after 2.530677373s: waiting for machine to come up
	I1209 23:58:49.330146   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:49.330601   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:49.330631   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:49.330561   85186 retry.go:31] will retry after 4.492372507s: waiting for machine to come up
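	The "will retry after ...: waiting for machine to come up" intervals above grow from a few hundred milliseconds to several seconds while libmachine waits for the restarted VM to obtain a DHCP lease. A sketch of that kind of growing, jittered backoff in Go (the lookup helper and exact delays are illustrative assumptions, not minikube's retry implementation):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying libvirt for the domain's DHCP lease.
	func lookupIP() (string, error) { return "", errors.New("no lease yet") }

	// waitForIP retries with a growing, jittered delay, similar in spirit to
	// the intervals logged above.
	func waitForIP(attempts int) (string, error) {
		delay := 250 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay *= 2 // back off between attempts
		}
		return "", errors.New("machine never reported an IP")
	}

	func main() {
		if _, err := waitForIP(5); err != nil {
			fmt.Println(err)
		}
	}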
	I1209 23:58:53.827982   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.828461   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has current primary IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.828480   83900 main.go:141] libmachine: (embed-certs-825613) Found IP for machine: 192.168.50.19
	I1209 23:58:53.828492   83900 main.go:141] libmachine: (embed-certs-825613) Reserving static IP address...
	I1209 23:58:53.829001   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "embed-certs-825613", mac: "52:54:00:0f:2e:da", ip: "192.168.50.19"} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.829028   83900 main.go:141] libmachine: (embed-certs-825613) Reserved static IP address: 192.168.50.19
	I1209 23:58:53.829051   83900 main.go:141] libmachine: (embed-certs-825613) DBG | skip adding static IP to network mk-embed-certs-825613 - found existing host DHCP lease matching {name: "embed-certs-825613", mac: "52:54:00:0f:2e:da", ip: "192.168.50.19"}
	I1209 23:58:53.829067   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Getting to WaitForSSH function...
	I1209 23:58:53.829080   83900 main.go:141] libmachine: (embed-certs-825613) Waiting for SSH to be available...
	I1209 23:58:53.831079   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.831430   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.831462   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.831630   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Using SSH client type: external
	I1209 23:58:53.831682   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa (-rw-------)
	I1209 23:58:53.831723   83900 main.go:141] libmachine: (embed-certs-825613) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:58:53.831752   83900 main.go:141] libmachine: (embed-certs-825613) DBG | About to run SSH command:
	I1209 23:58:53.831765   83900 main.go:141] libmachine: (embed-certs-825613) DBG | exit 0
	I1209 23:58:53.959446   83900 main.go:141] libmachine: (embed-certs-825613) DBG | SSH cmd err, output: <nil>: 
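	The WaitForSSH step above shells out to the external ssh client and treats a clean "exit 0" as proof that the guest's SSH service accepts the machine key. A minimal equivalent check in Go (the option list is trimmed for illustration and an ssh binary on PATH is assumed; key path, user, and address are taken from the log above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshReady runs "exit 0" over ssh; a zero exit status means SSH is usable,
	// mirroring the probe in the log above.
	func sshReady(keyPath, user, host string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, host),
			"exit 0",
		)
		return cmd.Run() == nil
	}

	func main() {
		fmt.Println("ssh ready:", sshReady("/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa", "docker", "192.168.50.19"))
	}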
	I1209 23:58:53.959864   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetConfigRaw
	I1209 23:58:53.960515   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:53.963227   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.963614   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.963644   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.963889   83900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/config.json ...
	I1209 23:58:53.964086   83900 machine.go:93] provisionDockerMachine start ...
	I1209 23:58:53.964103   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:53.964299   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:53.966516   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.966803   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.966834   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.966959   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:53.967149   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:53.967292   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:53.967428   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:53.967599   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:53.967824   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:53.967839   83900 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:58:54.079701   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:58:54.079732   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.080045   83900 buildroot.go:166] provisioning hostname "embed-certs-825613"
	I1209 23:58:54.080079   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.080333   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.082930   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.083283   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.083321   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.083420   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.083657   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.083834   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.083974   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.084095   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.084269   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.084282   83900 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-825613 && echo "embed-certs-825613" | sudo tee /etc/hostname
	I1209 23:58:54.208724   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825613
	
	I1209 23:58:54.208754   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.211739   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.212095   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.212122   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.212297   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.212513   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.212763   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.212936   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.213102   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.213285   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.213311   83900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-825613' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-825613/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-825613' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:58:55.043989   84259 start.go:364] duration metric: took 4m2.194304919s to acquireMachinesLock for "default-k8s-diff-port-871210"
	I1209 23:58:55.044059   84259 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:58:55.044067   84259 fix.go:54] fixHost starting: 
	I1209 23:58:55.044480   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:58:55.044546   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:58:55.060908   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I1209 23:58:55.061391   84259 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:58:55.061954   84259 main.go:141] libmachine: Using API Version  1
	I1209 23:58:55.061982   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:58:55.062346   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:58:55.062508   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:58:55.062674   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1209 23:58:55.064365   84259 fix.go:112] recreateIfNeeded on default-k8s-diff-port-871210: state=Stopped err=<nil>
	I1209 23:58:55.064386   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	W1209 23:58:55.064564   84259 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:58:55.066707   84259 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-871210" ...
	I1209 23:58:54.331911   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:58:54.331941   83900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:58:54.331966   83900 buildroot.go:174] setting up certificates
	I1209 23:58:54.331978   83900 provision.go:84] configureAuth start
	I1209 23:58:54.331991   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.332275   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:54.334597   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.334926   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.334957   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.335117   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.337393   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.337731   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.337763   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.337876   83900 provision.go:143] copyHostCerts
	I1209 23:58:54.337923   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:58:54.337936   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:58:54.338006   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:58:54.338093   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:58:54.338101   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:58:54.338127   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:58:54.338204   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:58:54.338212   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:58:54.338233   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:58:54.338286   83900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.embed-certs-825613 san=[127.0.0.1 192.168.50.19 embed-certs-825613 localhost minikube]
	I1209 23:58:54.423610   83900 provision.go:177] copyRemoteCerts
	I1209 23:58:54.423679   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:58:54.423706   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.426695   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.427009   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.427041   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.427144   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.427332   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.427516   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.427706   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:54.513991   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 23:58:54.536700   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:58:54.558541   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:58:54.580319   83900 provision.go:87] duration metric: took 248.326924ms to configureAuth
	I1209 23:58:54.580359   83900 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:58:54.580568   83900 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:58:54.580652   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.583347   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.583720   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.583748   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.583963   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.584171   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.584327   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.584481   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.584621   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.584775   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.584790   83900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:58:54.804814   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:58:54.804860   83900 machine.go:96] duration metric: took 840.759848ms to provisionDockerMachine
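The provisioning step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube over SSH and restarts CRI-O so the insecure-registry flag takes effect. A minimal Go sketch of running such a command on the guest, assuming golang.org/x/crypto/ssh; the runRemote helper is illustrative and not minikube's actual provisioner code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs a single shell command on the guest over SSH and returns its
// combined output. Hypothetical helper; minikube wraps this differently.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, host key not pinned
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Same shape as the command in the log: write the sysconfig drop-in, then
	// restart CRI-O so it picks up the insecure-registry option.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := runRemote("192.168.50.19:22", "docker",
		"/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa", cmd)
	fmt.Println(out, err)
}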
	I1209 23:58:54.804876   83900 start.go:293] postStartSetup for "embed-certs-825613" (driver="kvm2")
	I1209 23:58:54.804891   83900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:58:54.804916   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:54.805269   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:58:54.805297   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.807977   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.808340   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.808370   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.808505   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.808765   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.808945   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.809100   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:54.893741   83900 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:58:54.897936   83900 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:58:54.897961   83900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:58:54.898065   83900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:58:54.898145   83900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:58:54.898235   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:58:54.907056   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:58:54.929618   83900 start.go:296] duration metric: took 124.726532ms for postStartSetup
	I1209 23:58:54.929664   83900 fix.go:56] duration metric: took 20.445966428s for fixHost
	I1209 23:58:54.929711   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.932476   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.932866   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.932893   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.933108   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.933334   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.933523   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.933638   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.933747   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.933956   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.933967   83900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:58:55.043841   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788735.012193373
	
	I1209 23:58:55.043863   83900 fix.go:216] guest clock: 1733788735.012193373
	I1209 23:58:55.043870   83900 fix.go:229] Guest: 2024-12-09 23:58:55.012193373 +0000 UTC Remote: 2024-12-09 23:58:54.929689658 +0000 UTC m=+295.728685701 (delta=82.503715ms)
	I1209 23:58:55.043888   83900 fix.go:200] guest clock delta is within tolerance: 82.503715ms
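The fix.go lines above read the guest clock with `date +%s.%N`, compute the delta against the host clock, and accept the machine when the drift stays inside a tolerance. A small sketch of that comparison; the 2s tolerance and helper name are assumptions for illustration only.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns the
// signed offset from the local clock. float64 keeps sub-microsecond accuracy,
// which is plenty for a skew check.
func guestClockDelta(guestOut string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(time.Now()), nil
}

func main() {
	// Guest output captured in the log above.
	delta, err := guestClockDelta("1733788735.012193373\n")
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // illustrative value, not minikube's setting
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}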
	I1209 23:58:55.043892   83900 start.go:83] releasing machines lock for "embed-certs-825613", held for 20.560223511s
	I1209 23:58:55.043915   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.044198   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:55.046852   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.047246   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.047283   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.047446   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.047910   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.048101   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.048198   83900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:58:55.048235   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:55.048424   83900 ssh_runner.go:195] Run: cat /version.json
	I1209 23:58:55.048453   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:55.050905   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051274   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.051317   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051339   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051502   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:55.051721   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:55.051772   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.051794   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051925   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:55.051980   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:55.052091   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:55.052133   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:55.052263   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:55.052407   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:55.136384   83900 ssh_runner.go:195] Run: systemctl --version
	I1209 23:58:55.154927   83900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:58:55.297639   83900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:58:55.303733   83900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:58:55.303791   83900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:58:55.323858   83900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:58:55.323884   83900 start.go:495] detecting cgroup driver to use...
	I1209 23:58:55.323955   83900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:58:55.342390   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:58:55.360400   83900 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:58:55.360463   83900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:58:55.374005   83900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:58:55.387515   83900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:58:55.507866   83900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:58:55.676259   83900 docker.go:233] disabling docker service ...
	I1209 23:58:55.676338   83900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:58:55.695273   83900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:58:55.707909   83900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:58:55.824683   83900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:58:55.934080   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:58:55.950700   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:58:55.967756   83900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:58:55.967813   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.978028   83900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:58:55.978102   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.988960   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.999661   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.010253   83900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:58:56.020928   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.030944   83900 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.050251   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
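The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin the pause image, switch cgroup_manager to cgroupfs, and open unprivileged low ports. A Go sketch of the first two substitutions using regexp; the file path and values come from the log, the helper itself is hypothetical.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the same substitutions the sed commands above do:
// force the pause image and switch the cgroup manager.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = "%s"`, pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf(`cgroup_manager = "%s"`, cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Values as seen in the log; on a real node this would run as root.
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}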
	I1209 23:58:56.062723   83900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:58:56.072435   83900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:58:56.072501   83900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:58:56.085332   83900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:58:56.095538   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:58:56.214133   83900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:58:56.310023   83900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:58:56.310107   83900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:58:56.314988   83900 start.go:563] Will wait 60s for crictl version
	I1209 23:58:56.315057   83900 ssh_runner.go:195] Run: which crictl
	I1209 23:58:56.318865   83900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:58:56.356889   83900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:58:56.356996   83900 ssh_runner.go:195] Run: crio --version
	I1209 23:58:56.387128   83900 ssh_runner.go:195] Run: crio --version
	I1209 23:58:56.417781   83900 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
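After restarting CRI-O, the start code waits up to 60s for the socket and for crictl to answer before declaring the runtime ready. A hedged sketch of that wait loop; the polling helper and the one-second interval are assumptions.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForCrictl polls `crictl version` until it succeeds or the deadline passes,
// roughly mirroring the "Will wait 60s for crictl version" step above.
func waitForCrictl(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("crictl did not become ready within %v: %v", timeout, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForCrictl(60 * time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}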
	I1209 23:58:55.068139   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Start
	I1209 23:58:55.068329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring networks are active...
	I1209 23:58:55.069026   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring network default is active
	I1209 23:58:55.069526   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring network mk-default-k8s-diff-port-871210 is active
	I1209 23:58:55.069970   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Getting domain xml...
	I1209 23:58:55.070725   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Creating domain...
	I1209 23:58:56.366161   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting to get IP...
	I1209 23:58:56.367215   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.367659   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.367720   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.367639   85338 retry.go:31] will retry after 212.733452ms: waiting for machine to come up
	I1209 23:58:56.582400   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.582945   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.582973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.582895   85338 retry.go:31] will retry after 380.03081ms: waiting for machine to come up
	I1209 23:58:56.964721   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.965265   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.965296   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.965208   85338 retry.go:31] will retry after 429.612511ms: waiting for machine to come up
	I1209 23:58:57.396713   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.397267   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.397298   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:57.397236   85338 retry.go:31] will retry after 595.581233ms: waiting for machine to come up
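The default-k8s-diff-port-871210 lines above show retry.go polling for the VM's DHCP lease with a growing, jittered delay between attempts. A generic sketch of that retry-with-backoff pattern; lookupIP is a stand-in, not libmachine's API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

// lookupIP stands in for the libvirt DHCP-lease lookup; here it fails a few
// times to exercise the retry loop.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errNoIP
	}
	return "192.168.61.100", nil // illustrative address
}

// waitForIP retries with a jittered, roughly doubling delay, like the
// "will retry after ..." messages in the log.
func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for machine to come up: %w", err)
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	fmt.Println(ip, err)
}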
	I1209 23:58:56.418967   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:56.422317   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:56.422632   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:56.422656   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:56.422925   83900 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 23:58:56.426992   83900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:58:56.440436   83900 kubeadm.go:883] updating cluster {Name:embed-certs-825613 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:58:56.440548   83900 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:58:56.440592   83900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:58:56.475823   83900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:58:56.475907   83900 ssh_runner.go:195] Run: which lz4
	I1209 23:58:56.479747   83900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:58:56.483850   83900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:58:56.483900   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:58:57.820979   83900 crio.go:462] duration metric: took 1.341265764s to copy over tarball
	I1209 23:58:57.821091   83900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:58:57.994077   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.994566   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.994590   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:57.994526   85338 retry.go:31] will retry after 728.981009ms: waiting for machine to come up
	I1209 23:58:58.725707   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:58.726209   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:58.726252   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:58.726125   85338 retry.go:31] will retry after 701.836089ms: waiting for machine to come up
	I1209 23:58:59.429804   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:59.430322   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:59.430350   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:59.430266   85338 retry.go:31] will retry after 735.800538ms: waiting for machine to come up
	I1209 23:59:00.167774   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:00.168363   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:00.168397   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:00.168319   85338 retry.go:31] will retry after 1.052511845s: waiting for machine to come up
	I1209 23:59:01.222463   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:01.222960   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:01.222991   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:01.222919   85338 retry.go:31] will retry after 1.655880765s: waiting for machine to come up
	I1209 23:58:59.925304   83900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104178296s)
	I1209 23:58:59.925338   83900 crio.go:469] duration metric: took 2.104325305s to extract the tarball
	I1209 23:58:59.925348   83900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:58:59.962716   83900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:00.008762   83900 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:59:00.008793   83900 cache_images.go:84] Images are preloaded, skipping loading
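When the preloaded image tarball is missing on the guest, it is copied over and unpacked with tar plus an lz4 filter into /var, which is the two-second extract reported above. A sketch shelling out to the same tar invocation; the helper name is made up.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// extractPreload unpacks the preloaded-images tarball the way the log's
// ssh_runner call does: tar with an lz4 filter, preserving security xattrs.
func extractPreload(tarball string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extracting %s: %w", tarball, err)
	}
	fmt.Printf("took %v to extract the tarball\n", time.Since(start))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}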
	I1209 23:59:00.008803   83900 kubeadm.go:934] updating node { 192.168.50.19 8443 v1.31.2 crio true true} ...
	I1209 23:59:00.008929   83900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-825613 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
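The kubelet unit fragment above is generated from the node's Kubernetes version, hostname, and IP. A sketch rendering an equivalent systemd drop-in with text/template; the struct and template text are illustrative, not minikube's own templates.

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds the few values the drop-in above actually varies on.
type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values as seen in the log for embed-certs-825613.
	_ = t.Execute(os.Stdout, kubeletOpts{
		KubernetesVersion: "v1.31.2",
		NodeName:          "embed-certs-825613",
		NodeIP:            "192.168.50.19",
	})
}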
	I1209 23:59:00.009008   83900 ssh_runner.go:195] Run: crio config
	I1209 23:59:00.050777   83900 cni.go:84] Creating CNI manager for ""
	I1209 23:59:00.050801   83900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:00.050813   83900 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:00.050842   83900 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.19 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-825613 NodeName:embed-certs-825613 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:59:00.051001   83900 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-825613"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.19"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.19"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:00.051083   83900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:59:00.062278   83900 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:00.062354   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:00.073157   83900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1209 23:59:00.088933   83900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:00.104358   83900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
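The 2295-byte kubeadm.yaml.new written above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch, assuming gopkg.in/yaml.v3, that checks each document decodes and reports its kind; purely illustrative, not part of minikube's flow.

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The generated file is a YAML stream; split on document separators and
	// make sure every document decodes and declares apiVersion/kind.
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var obj struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
			fmt.Fprintf(os.Stderr, "invalid document: %v\n", err)
			os.Exit(1)
		}
		fmt.Printf("%s / %s parses\n", obj.APIVersion, obj.Kind)
	}
}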
	I1209 23:59:00.122425   83900 ssh_runner.go:195] Run: grep 192.168.50.19	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:00.126224   83900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
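The two commands above check /etc/hosts for a control-plane.minikube.internal entry and, if needed, rewrite the file by dropping any stale line and appending the new mapping. A Go sketch of the same rewrite; the path and hostname come from the log, the helper is hypothetical.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line whose last field is host and then
// appends "ip<TAB>host", mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		f := strings.Fields(line)
		if len(f) > 0 && f[len(f)-1] == host {
			continue // stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Values from the log; writing /etc/hosts needs root on a real node.
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.19", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}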
	I1209 23:59:00.138974   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:00.252489   83900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:00.269127   83900 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613 for IP: 192.168.50.19
	I1209 23:59:00.269155   83900 certs.go:194] generating shared ca certs ...
	I1209 23:59:00.269177   83900 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:00.269359   83900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:00.269406   83900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:00.269421   83900 certs.go:256] generating profile certs ...
	I1209 23:59:00.269551   83900 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/client.key
	I1209 23:59:00.269651   83900 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.key.12f830e7
	I1209 23:59:00.269724   83900 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.key
	I1209 23:59:00.269901   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:00.269954   83900 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:00.269968   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:00.270007   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:00.270056   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:00.270096   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:00.270161   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:00.271078   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:00.322392   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:00.351665   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:00.378615   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:00.401622   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 23:59:00.430280   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 23:59:00.453489   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:00.478368   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:00.501404   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:00.524273   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:00.546132   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:00.568553   83900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:00.584196   83900 ssh_runner.go:195] Run: openssl version
	I1209 23:59:00.589905   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:00.600107   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.604436   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.604504   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.609960   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:00.620129   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:00.629895   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.633993   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.634053   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.639538   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:00.649781   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:00.659593   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.663789   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.663832   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.669033   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:00.678881   83900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:00.683006   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:00.688631   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:00.694212   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:00.699930   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:00.705400   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:00.710668   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
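The openssl -checkend 86400 runs above verify each control-plane certificate is still valid for at least another day before the existing cluster material is reused. An equivalent check with crypto/x509; the file list is taken from the log, the helper name is invented.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validForAnotherDay mirrors `openssl x509 -checkend 86400`: the certificate
// must not expire within the next 24 hours.
func validForAnotherDay(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.After(time.Now().Add(24 * time.Hour)), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		ok, err := validForAnotherDay(p)
		fmt.Println(p, ok, err)
	}
}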
	I1209 23:59:00.715982   83900 kubeadm.go:392] StartCluster: {Name:embed-certs-825613 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:00.716059   83900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:00.716094   83900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:00.758093   83900 cri.go:89] found id: ""
	I1209 23:59:00.758164   83900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:00.767877   83900 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:00.767895   83900 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:00.767932   83900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:00.777172   83900 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:00.778578   83900 kubeconfig.go:125] found "embed-certs-825613" server: "https://192.168.50.19:8443"
	I1209 23:59:00.780691   83900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:00.790459   83900 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.19
	I1209 23:59:00.790492   83900 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:00.790507   83900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:00.790563   83900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:00.831048   83900 cri.go:89] found id: ""
	I1209 23:59:00.831129   83900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:00.847797   83900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:00.857819   83900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:00.857841   83900 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:00.857877   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:59:00.867108   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:00.867159   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:00.876742   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:59:00.885861   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:00.885942   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:00.895651   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:59:00.904027   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:00.904092   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:00.912956   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:59:00.921647   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:00.921704   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:00.930445   83900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:00.939496   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:01.049561   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.012953   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.234060   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.302650   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.378572   83900 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:02.378677   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:02.878755   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:03.379050   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:03.879215   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
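Once the kubeadm init phases finish, the code polls every 500ms for a kube-apiserver process with pgrep before moving on to the healthz probe. A sketch of that poll; the two-minute timeout is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*` until a
// PID shows up, mirroring the 500ms cadence visible in the log.
func waitForAPIServerProcess(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(strings.TrimSpace(string(out))) > 0 {
			return strings.TrimSpace(string(out)), nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	pid, err := waitForAPIServerProcess(2 * time.Minute) // timeout chosen for illustration
	fmt.Println(pid, err)
}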
	I1209 23:59:02.880033   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:02.880532   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:02.880560   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:02.880488   85338 retry.go:31] will retry after 2.112793708s: waiting for machine to come up
	I1209 23:59:04.994713   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:04.995223   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:04.995254   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:04.995183   85338 retry.go:31] will retry after 2.420394988s: waiting for machine to come up
	I1209 23:59:07.416846   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:07.417309   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:07.417338   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:07.417281   85338 retry.go:31] will retry after 2.479420656s: waiting for machine to come up
	I1209 23:59:04.379735   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:04.406446   83900 api_server.go:72] duration metric: took 2.027872494s to wait for apiserver process to appear ...
	I1209 23:59:04.406480   83900 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:59:04.406506   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:04.407056   83900 api_server.go:269] stopped: https://192.168.50.19:8443/healthz: Get "https://192.168.50.19:8443/healthz": dial tcp 192.168.50.19:8443: connect: connection refused
	I1209 23:59:04.907381   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:06.967755   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:06.967791   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:06.967808   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.003261   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:07.003295   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:07.406692   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.411139   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:07.411168   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:07.906689   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.911136   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:07.911176   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:08.406697   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:08.411051   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 200:
	ok
	I1209 23:59:08.422472   83900 api_server.go:141] control plane version: v1.31.2
	I1209 23:59:08.422506   83900 api_server.go:131] duration metric: took 4.01601823s to wait for apiserver health ...
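The healthz sequence above is the normal startup progression: connection refused while the apiserver binds, 403 for the anonymous probe once it is serving, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200. A minimal sketch of such a polling loop, assuming anonymous HTTPS access with certificate verification skipped; this is an illustration, not minikube's api_server.go.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns
// 200 "ok" or the timeout expires. 403 and 500 responses are treated as
// transient, just like in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a certificate we have not trusted here;
			// skipping verification keeps the probe sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.19:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```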
	I1209 23:59:08.422532   83900 cni.go:84] Creating CNI manager for ""
	I1209 23:59:08.422541   83900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:08.424238   83900 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:59:08.425539   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:59:08.435424   83900 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 23:59:08.451320   83900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:59:08.461244   83900 system_pods.go:59] 8 kube-system pods found
	I1209 23:59:08.461285   83900 system_pods.go:61] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:59:08.461296   83900 system_pods.go:61] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:59:08.461305   83900 system_pods.go:61] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:59:08.461313   83900 system_pods.go:61] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:59:08.461322   83900 system_pods.go:61] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 23:59:08.461329   83900 system_pods.go:61] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:59:08.461346   83900 system_pods.go:61] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:59:08.461358   83900 system_pods.go:61] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 23:59:08.461368   83900 system_pods.go:74] duration metric: took 10.019045ms to wait for pod list to return data ...
	I1209 23:59:08.461381   83900 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:59:08.464945   83900 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:59:08.464973   83900 node_conditions.go:123] node cpu capacity is 2
	I1209 23:59:08.464986   83900 node_conditions.go:105] duration metric: took 3.600013ms to run NodePressure ...
	I1209 23:59:08.465011   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:08.762283   83900 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 23:59:08.766618   83900 kubeadm.go:739] kubelet initialised
	I1209 23:59:08.766640   83900 kubeadm.go:740] duration metric: took 4.332483ms waiting for restarted kubelet to initialise ...
	I1209 23:59:08.766648   83900 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:08.771039   83900 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.775795   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.775820   83900 pod_ready.go:82] duration metric: took 4.756744ms for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.775829   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.775836   83900 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.780473   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "etcd-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.780498   83900 pod_ready.go:82] duration metric: took 4.651756ms for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.780507   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "etcd-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.780517   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.785226   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.785252   83900 pod_ready.go:82] duration metric: took 4.725948ms for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.785261   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.785268   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.855086   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.855117   83900 pod_ready.go:82] duration metric: took 69.839948ms for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.855129   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.855141   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:09.255415   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-proxy-rn6fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.255451   83900 pod_ready.go:82] duration metric: took 400.29383ms for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:09.255461   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-proxy-rn6fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.255467   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:09.654750   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.654775   83900 pod_ready.go:82] duration metric: took 399.301549ms for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:09.654785   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.654792   83900 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:10.054952   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:10.054979   83900 pod_ready.go:82] duration metric: took 400.177675ms for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:10.054995   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:10.055002   83900 pod_ready.go:39] duration metric: took 1.288346997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
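Each pod_ready check above short-circuits because the node hosting the pods has not reported Ready yet. The same condition can be inspected directly with client-go; the sketch below assumes a kubeconfig at the path the test uses and checks a single pod's PodReady condition, a simplification of the helper's label-based wait.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a kubeconfig at this path; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-qvtlr", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to become Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```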
	I1209 23:59:10.055019   83900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:59:10.066533   83900 ops.go:34] apiserver oom_adj: -16
	I1209 23:59:10.066559   83900 kubeadm.go:597] duration metric: took 9.298658158s to restartPrimaryControlPlane
	I1209 23:59:10.066570   83900 kubeadm.go:394] duration metric: took 9.350595042s to StartCluster
	I1209 23:59:10.066590   83900 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:10.066674   83900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:59:10.068469   83900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:10.068732   83900 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:59:10.068795   83900 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 23:59:10.068901   83900 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-825613"
	I1209 23:59:10.068925   83900 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-825613"
	W1209 23:59:10.068932   83900 addons.go:243] addon storage-provisioner should already be in state true
	I1209 23:59:10.068966   83900 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:10.068960   83900 addons.go:69] Setting default-storageclass=true in profile "embed-certs-825613"
	I1209 23:59:10.068969   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.069011   83900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-825613"
	I1209 23:59:10.068972   83900 addons.go:69] Setting metrics-server=true in profile "embed-certs-825613"
	I1209 23:59:10.069584   83900 addons.go:234] Setting addon metrics-server=true in "embed-certs-825613"
	W1209 23:59:10.069616   83900 addons.go:243] addon metrics-server should already be in state true
	I1209 23:59:10.069644   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.070176   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.070220   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.070219   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.070255   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.070947   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.071024   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.071039   83900 out.go:177] * Verifying Kubernetes components...
	I1209 23:59:10.073122   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:10.085793   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I1209 23:59:10.086012   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I1209 23:59:10.086282   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.086412   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.086794   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.086823   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.086907   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.086930   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.087156   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38521
	I1209 23:59:10.087160   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.087251   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.087469   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.087617   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.087792   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.087828   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.088155   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.088177   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.088692   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.089323   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.089358   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.090339   83900 addons.go:234] Setting addon default-storageclass=true in "embed-certs-825613"
	W1209 23:59:10.090357   83900 addons.go:243] addon default-storageclass should already be in state true
	I1209 23:59:10.090379   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.090609   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.090639   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.103251   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I1209 23:59:10.103791   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104305   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.104325   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.104376   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I1209 23:59:10.104560   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I1209 23:59:10.104713   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.104736   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104848   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104854   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.105269   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.105285   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.105393   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.105414   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.105595   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.105751   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.105784   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.106203   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.106235   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.107268   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.107710   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.109781   83900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:10.109863   83900 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 23:59:10.111198   83900 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:59:10.111210   83900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:59:10.111224   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.111294   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:59:10.111300   83900 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:59:10.111309   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.114610   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.114929   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.114962   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.114980   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.115159   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.115318   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.115438   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.115474   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.115516   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.115599   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.115704   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.115844   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.115944   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.116149   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.123450   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1209 23:59:10.123926   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.124406   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.124445   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.124744   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.124936   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.126437   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.126619   83900 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:59:10.126637   83900 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:59:10.126656   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.129218   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.129663   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.129695   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.129874   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.130055   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.130164   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.130296   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.272711   83900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:10.290121   83900 node_ready.go:35] waiting up to 6m0s for node "embed-certs-825613" to be "Ready" ...
	I1209 23:59:10.379383   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:59:10.397672   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:59:10.403543   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:59:10.403591   83900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 23:59:10.439890   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:59:10.439915   83900 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:59:10.482300   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:59:10.482331   83900 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:59:10.550923   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:59:11.613572   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.234149831s)
	I1209 23:59:11.613622   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.613634   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.613655   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.21594498s)
	I1209 23:59:11.613701   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.613713   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.613929   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.613976   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.613992   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.613979   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614006   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.614018   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614032   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.614052   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.614000   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.614085   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.614332   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614343   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.614343   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614361   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614362   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614372   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.621731   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.621749   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.622001   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.622017   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664217   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113253318s)
	I1209 23:59:11.664273   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.664288   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.664633   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.664637   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.664649   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664658   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.664676   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.664875   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.664888   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664903   83900 addons.go:475] Verifying addon metrics-server=true in "embed-certs-825613"
	I1209 23:59:11.666814   83900 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
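The addon manifests are applied with the guest's own kubectl binary and an explicit KUBECONFIG over ssh_runner. Run locally, the equivalent of the metrics-server step would look roughly like the following os/exec sketch; paths are taken from the log and error handling is simplified.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Apply the metrics-server addon manifests the same way the log does,
	// but from the local shell instead of over SSH.
	cmd := exec.Command("kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubectl apply failed:", err)
		os.Exit(1)
	}
}
```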
	I1209 23:59:09.899886   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:09.900284   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:09.900314   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:09.900262   85338 retry.go:31] will retry after 3.641244983s: waiting for machine to come up
	I1209 23:59:11.668002   83900 addons.go:510] duration metric: took 1.599215886s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1209 23:59:12.293475   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:14.960350   84547 start.go:364] duration metric: took 3m49.343321897s to acquireMachinesLock for "old-k8s-version-720064"
	I1209 23:59:14.960428   84547 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:59:14.960440   84547 fix.go:54] fixHost starting: 
	I1209 23:59:14.960886   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:14.960950   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:14.981976   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I1209 23:59:14.982425   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:14.982933   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:59:14.982966   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:14.983341   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:14.983587   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:14.983772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetState
	I1209 23:59:14.985748   84547 fix.go:112] recreateIfNeeded on old-k8s-version-720064: state=Stopped err=<nil>
	I1209 23:59:14.985774   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	W1209 23:59:14.985968   84547 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:59:14.987652   84547 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-720064" ...
	I1209 23:59:14.988869   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .Start
	I1209 23:59:14.989086   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring networks are active...
	I1209 23:59:14.989817   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network default is active
	I1209 23:59:14.990188   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network mk-old-k8s-version-720064 is active
	I1209 23:59:14.990640   84547 main.go:141] libmachine: (old-k8s-version-720064) Getting domain xml...
	I1209 23:59:14.991392   84547 main.go:141] libmachine: (old-k8s-version-720064) Creating domain...
	I1209 23:59:13.544971   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.545381   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Found IP for machine: 192.168.72.54
	I1209 23:59:13.545408   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has current primary IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.545418   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Reserving static IP address...
	I1209 23:59:13.545913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-871210", mac: "52:54:00:5e:5b:a3", ip: "192.168.72.54"} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.545942   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | skip adding static IP to network mk-default-k8s-diff-port-871210 - found existing host DHCP lease matching {name: "default-k8s-diff-port-871210", mac: "52:54:00:5e:5b:a3", ip: "192.168.72.54"}
	I1209 23:59:13.545957   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Reserved static IP address: 192.168.72.54
	I1209 23:59:13.545973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for SSH to be available...
	I1209 23:59:13.545988   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Getting to WaitForSSH function...
	I1209 23:59:13.548279   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.548641   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.548700   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.548788   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Using SSH client type: external
	I1209 23:59:13.548812   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa (-rw-------)
	I1209 23:59:13.548882   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:13.548908   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | About to run SSH command:
	I1209 23:59:13.548923   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | exit 0
	I1209 23:59:13.687704   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | SSH cmd err, output: <nil>: 
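WaitForSSH above succeeds once `exit 0` over SSH returns cleanly, using the external ssh client with the options shown. A condensed sketch of the same check with golang.org/x/crypto/ssh; the retry interval and timeout are assumptions, while the address, user, and key path are the ones from the log.

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials the machine and runs "exit 0" until it succeeds,
// mirroring the WaitForSSH step in the log.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				rerr := session.Run("exit 0")
				session.Close()
				client.Close()
				if rerr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("SSH not available on %s within %s", addr, timeout)
}

func main() {
	err := waitForSSH("192.168.72.54:22", "docker",
		"/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa",
		2*time.Minute)
	fmt.Println(err)
}
```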
	I1209 23:59:13.688021   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetConfigRaw
	I1209 23:59:13.688673   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:13.690853   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.691187   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.691224   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.691421   84259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/config.json ...
	I1209 23:59:13.691646   84259 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:13.691665   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:13.691901   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.693958   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.694230   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.694257   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.694392   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.694602   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.694771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.694913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.695093   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.695278   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.695288   84259 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:13.803602   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:13.803629   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:13.803904   84259 buildroot.go:166] provisioning hostname "default-k8s-diff-port-871210"
	I1209 23:59:13.803928   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:13.804106   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.806369   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.806769   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.806800   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.806906   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.807053   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.807222   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.807329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.807551   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.807751   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.807765   84259 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-871210 && echo "default-k8s-diff-port-871210" | sudo tee /etc/hostname
	I1209 23:59:13.932849   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-871210
	
	I1209 23:59:13.932874   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.935573   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.935943   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.935972   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.936143   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.936388   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.936546   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.936682   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.936840   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.937016   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.937046   84259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-871210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-871210/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-871210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:14.057493   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:59:14.057526   84259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:14.057565   84259 buildroot.go:174] setting up certificates
	I1209 23:59:14.057575   84259 provision.go:84] configureAuth start
	I1209 23:59:14.057586   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:14.057860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:14.060619   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.060947   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.060973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.061170   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.063591   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.063919   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.063946   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.064133   84259 provision.go:143] copyHostCerts
	I1209 23:59:14.064180   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:14.064189   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:14.064242   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:14.064333   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:14.064341   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:14.064364   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:14.064416   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:14.064424   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:14.064442   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:14.064488   84259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-871210 san=[127.0.0.1 192.168.72.54 default-k8s-diff-port-871210 localhost minikube]
	I1209 23:59:14.332090   84259 provision.go:177] copyRemoteCerts
	I1209 23:59:14.332148   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:14.332174   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.334856   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.335275   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.335309   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.335428   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.335647   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.335860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.336002   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.422641   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:14.445943   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1209 23:59:14.468188   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:59:14.490678   84259 provision.go:87] duration metric: took 433.06125ms to configureAuth
	I1209 23:59:14.490710   84259 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:14.490883   84259 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:14.490953   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.493529   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.493858   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.493879   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.494073   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.494276   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.494435   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.494555   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.494716   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:14.494880   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:14.494899   84259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:14.720424   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:14.720452   84259 machine.go:96] duration metric: took 1.028790944s to provisionDockerMachine
	I1209 23:59:14.720468   84259 start.go:293] postStartSetup for "default-k8s-diff-port-871210" (driver="kvm2")
	I1209 23:59:14.720480   84259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:14.720496   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.720814   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:14.720844   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.723417   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.723817   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.723851   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.723992   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.724171   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.724328   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.724453   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.810295   84259 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:14.814400   84259 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:14.814424   84259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:14.814501   84259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:14.814595   84259 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:14.814706   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:14.823765   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:14.847185   84259 start.go:296] duration metric: took 126.701807ms for postStartSetup
	I1209 23:59:14.847226   84259 fix.go:56] duration metric: took 19.803160459s for fixHost
	I1209 23:59:14.847245   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.850067   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.850488   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.850516   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.850697   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.850915   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.851082   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.851218   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.851369   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:14.851558   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:14.851588   84259 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:14.960167   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788754.933814431
	
	I1209 23:59:14.960190   84259 fix.go:216] guest clock: 1733788754.933814431
	I1209 23:59:14.960198   84259 fix.go:229] Guest: 2024-12-09 23:59:14.933814431 +0000 UTC Remote: 2024-12-09 23:59:14.847230309 +0000 UTC m=+262.142902203 (delta=86.584122ms)
	I1209 23:59:14.960217   84259 fix.go:200] guest clock delta is within tolerance: 86.584122ms
	I1209 23:59:14.960222   84259 start.go:83] releasing machines lock for "default-k8s-diff-port-871210", held for 19.916185392s
	I1209 23:59:14.960244   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.960544   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:14.963243   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.963653   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.963693   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.963820   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964285   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964505   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964612   84259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:14.964689   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.964749   84259 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:14.964772   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.967430   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.967811   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.967844   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.967871   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.968018   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.968194   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.968388   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.968405   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.968427   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.968563   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.968562   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.968706   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.968878   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.969038   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:15.072098   84259 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:15.077846   84259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:15.222108   84259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:15.230013   84259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:15.230097   84259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:15.247065   84259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:15.247091   84259 start.go:495] detecting cgroup driver to use...
	I1209 23:59:15.247168   84259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:15.268262   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:15.283824   84259 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:15.283893   84259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:15.299384   84259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:15.318727   84259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:15.457157   84259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:15.642629   84259 docker.go:233] disabling docker service ...
	I1209 23:59:15.642718   84259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:15.661646   84259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:15.681887   84259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:15.838344   84259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:15.971028   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:15.984896   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:16.006631   84259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:59:16.006691   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.019058   84259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:16.019143   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.031838   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.043176   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.057386   84259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:16.068694   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.083326   84259 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.102288   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.113538   84259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:16.126450   84259 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:16.126516   84259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:16.147057   84259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:16.157329   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:16.285726   84259 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:16.385931   84259 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:16.386014   84259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:16.390840   84259 start.go:563] Will wait 60s for crictl version
	I1209 23:59:16.390912   84259 ssh_runner.go:195] Run: which crictl
	I1209 23:59:16.394870   84259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:16.438893   84259 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:16.438986   84259 ssh_runner.go:195] Run: crio --version
	I1209 23:59:16.470118   84259 ssh_runner.go:195] Run: crio --version
	I1209 23:59:16.499603   84259 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:59:16.500665   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:16.503766   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:16.504242   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:16.504329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:16.504609   84259 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:16.508742   84259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:16.520816   84259 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-871210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:16.520944   84259 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:59:16.520988   84259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:16.563220   84259 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:59:16.563315   84259 ssh_runner.go:195] Run: which lz4
	I1209 23:59:16.568962   84259 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:59:16.573715   84259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:59:16.573756   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:59:14.294886   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:16.796597   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:17.795347   83900 node_ready.go:49] node "embed-certs-825613" has status "Ready":"True"
	I1209 23:59:17.795381   83900 node_ready.go:38] duration metric: took 7.505214814s for node "embed-certs-825613" to be "Ready" ...
	I1209 23:59:17.795394   83900 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:17.801650   83900 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:17.808087   83900 pod_ready.go:93] pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:17.808113   83900 pod_ready.go:82] duration metric: took 6.437717ms for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:17.808127   83900 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:16.350842   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting to get IP...
	I1209 23:59:16.351883   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.352326   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.352393   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.352304   85519 retry.go:31] will retry after 268.149849ms: waiting for machine to come up
	I1209 23:59:16.621742   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.622116   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.622145   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.622066   85519 retry.go:31] will retry after 365.051996ms: waiting for machine to come up
	I1209 23:59:16.988590   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.989124   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.989154   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.989089   85519 retry.go:31] will retry after 441.697933ms: waiting for machine to come up
	I1209 23:59:17.432962   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.433453   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.433482   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.433356   85519 retry.go:31] will retry after 503.173846ms: waiting for machine to come up
	I1209 23:59:17.938107   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.938576   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.938610   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.938533   85519 retry.go:31] will retry after 476.993358ms: waiting for machine to come up
	I1209 23:59:18.417462   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:18.418037   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:18.418064   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:18.417989   85519 retry.go:31] will retry after 732.449849ms: waiting for machine to come up
	I1209 23:59:19.152120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.152680   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.152708   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.152624   85519 retry.go:31] will retry after 764.17794ms: waiting for machine to come up
	I1209 23:59:19.918630   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.919113   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.919141   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.919066   85519 retry.go:31] will retry after 1.072352346s: waiting for machine to come up
	I1209 23:59:17.907764   84259 crio.go:462] duration metric: took 1.338836821s to copy over tarball
	I1209 23:59:17.907846   84259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:59:20.133171   84259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.22529043s)
	I1209 23:59:20.133214   84259 crio.go:469] duration metric: took 2.225420822s to extract the tarball
	I1209 23:59:20.133224   84259 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:59:20.169786   84259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:20.211990   84259 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:59:20.212020   84259 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:59:20.212030   84259 kubeadm.go:934] updating node { 192.168.72.54 8444 v1.31.2 crio true true} ...
	I1209 23:59:20.212166   84259 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-871210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:20.212247   84259 ssh_runner.go:195] Run: crio config
	I1209 23:59:20.255141   84259 cni.go:84] Creating CNI manager for ""
	I1209 23:59:20.255169   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:20.255182   84259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:20.255215   84259 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-871210 NodeName:default-k8s-diff-port-871210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:59:20.255338   84259 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-871210"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.54"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:20.255407   84259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:59:20.265160   84259 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:20.265244   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:20.275799   84259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1209 23:59:20.292089   84259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:20.308054   84259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1209 23:59:20.327464   84259 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:20.331101   84259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:20.343232   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:20.473225   84259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:20.492069   84259 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210 for IP: 192.168.72.54
	I1209 23:59:20.492098   84259 certs.go:194] generating shared ca certs ...
	I1209 23:59:20.492119   84259 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:20.492311   84259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:20.492363   84259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:20.492378   84259 certs.go:256] generating profile certs ...
	I1209 23:59:20.492499   84259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/client.key
	I1209 23:59:20.492605   84259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.key.9cc284f2
	I1209 23:59:20.492663   84259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.key
	I1209 23:59:20.492831   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:20.492872   84259 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:20.492886   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:20.492918   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:20.492951   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:20.492997   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:20.493071   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:20.494023   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:20.543769   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:20.583697   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:20.615170   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:20.638784   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 23:59:20.673708   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:59:20.696690   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:20.719682   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:20.744292   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:20.767643   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:20.790927   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:20.816746   84259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:20.834150   84259 ssh_runner.go:195] Run: openssl version
	I1209 23:59:20.840153   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:20.851006   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.855430   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.855492   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.860866   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:20.871983   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:20.882642   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.886978   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.887050   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.892472   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:20.902959   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:20.913774   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.918344   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.918394   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.923981   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:20.934931   84259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:20.939662   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:20.945633   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:20.951650   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:20.957628   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:20.963600   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:20.969399   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 23:59:20.974992   84259 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-871210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:20.975088   84259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:20.975143   84259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:21.012553   84259 cri.go:89] found id: ""
	I1209 23:59:21.012647   84259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:21.023728   84259 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:21.023758   84259 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:21.023808   84259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:21.033788   84259 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:21.034806   84259 kubeconfig.go:125] found "default-k8s-diff-port-871210" server: "https://192.168.72.54:8444"
	I1209 23:59:21.036852   84259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:21.046251   84259 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I1209 23:59:21.046280   84259 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:21.046292   84259 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:21.046354   84259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:21.084915   84259 cri.go:89] found id: ""
	I1209 23:59:21.085000   84259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:21.100592   84259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:21.111180   84259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:21.111204   84259 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:21.111254   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 23:59:21.120027   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:21.120087   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:21.129165   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 23:59:21.138254   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:21.138315   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:21.147276   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 23:59:21.155635   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:21.155711   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:21.164794   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 23:59:21.173421   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:21.173477   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:21.182615   84259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:21.191795   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:21.289295   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.415644   84259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.126305379s)
	I1209 23:59:22.415695   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.616211   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.672571   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.731282   84259 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:22.731363   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:19.816966   83900 pod_ready.go:103] pod "etcd-embed-certs-825613" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:21.814625   83900 pod_ready.go:93] pod "etcd-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.814650   83900 pod_ready.go:82] duration metric: took 4.006513871s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.814662   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.819611   83900 pod_ready.go:93] pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.819634   83900 pod_ready.go:82] duration metric: took 4.964081ms for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.819647   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.823994   83900 pod_ready.go:93] pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.824016   83900 pod_ready.go:82] duration metric: took 4.360902ms for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.824028   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.829102   83900 pod_ready.go:93] pod "kube-proxy-rn6fg" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.829128   83900 pod_ready.go:82] duration metric: took 5.090595ms for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.829161   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:23.983548   83900 pod_ready.go:93] pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:23.983603   83900 pod_ready.go:82] duration metric: took 2.154427639s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:23.983620   83900 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
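The pod_ready.go lines above poll each control-plane pod in kube-system until its Ready condition reports True. A rough client-go equivalent of one such wait; the kubeconfig path is a placeholder, the pod name is taken from the log, and the real helper also tolerates pods being deleted and recreated while it waits:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	const ns, name = "kube-system", "etcd-embed-certs-825613"
	for {
		pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Printf("timed out waiting for %q\n", name)
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```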
	I1209 23:59:20.992747   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:20.993138   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:20.993196   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:20.993111   85519 retry.go:31] will retry after 1.479639772s: waiting for machine to come up
	I1209 23:59:22.474889   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:22.475414   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:22.475445   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:22.475364   85519 retry.go:31] will retry after 2.248463233s: waiting for machine to come up
	I1209 23:59:24.725307   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:24.725765   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:24.725796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:24.725706   85519 retry.go:31] will retry after 2.803512929s: waiting for machine to come up
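The libmachine DBG lines above poll the KVM network for the domain's DHCP lease and retry with a growing, jittered delay ("will retry after ..."). A generic sketch of that retry loop; lookupIP is a stand-in for the libvirt lease query, not the driver's real function:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for querying libvirt for the domain's current lease.
func lookupIP(domain string) (string, error) {
	return "", errNoLease // pretend the lease has not appeared yet
}

// waitForIP retries lookupIP with an increasing, jittered delay, like the
// "will retry after ..." lines in the log.
func waitForIP(domain string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 500 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("waiting for machine to come up, retrying after %v\n", wait)
		time.Sleep(wait)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if _, err := waitForIP("old-k8s-version-720064", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}
```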
	I1209 23:59:23.231575   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:23.731704   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:24.231740   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:24.265107   84259 api_server.go:72] duration metric: took 1.533824471s to wait for apiserver process to appear ...
	I1209 23:59:24.265144   84259 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:59:24.265173   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:24.265664   84259 api_server.go:269] stopped: https://192.168.72.54:8444/healthz: Get "https://192.168.72.54:8444/healthz": dial tcp 192.168.72.54:8444: connect: connection refused
	I1209 23:59:24.765571   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.251990   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:27.252018   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:27.252033   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.284733   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:27.284769   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:27.284781   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.313967   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:27.313998   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
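api_server.go first waits for a kube-apiserver process (the pgrep loop above), then polls /healthz until it returns 200. Connection refused, the anonymous-user 403, and the 500s with pending poststarthooks are all treated as "not ready yet". A stripped-down version of that poll; it skips TLS verification for brevity, whereas the real check trusts the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.72.54:8444/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: the real check verifies the cluster CA certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connection refused" while the apiserver is still starting
			fmt.Println("not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body)) // "ok"
				return
			}
			// 403 (anonymous) and 500 (poststarthooks still running) are retryable.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}
```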
	I1209 23:59:25.990720   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:28.490332   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:27.530567   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:27.531086   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:27.531120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:27.531022   85519 retry.go:31] will retry after 2.800227475s: waiting for machine to come up
	I1209 23:59:30.333201   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:30.333660   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:30.333684   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:30.333621   85519 retry.go:31] will retry after 3.041733113s: waiting for machine to come up
	I1209 23:59:27.765806   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.773528   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:27.773564   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:28.266012   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:28.275940   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:28.275965   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:28.765502   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:28.770695   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:28.770725   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:29.265309   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:29.270844   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:29.270883   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:29.765323   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:29.769949   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:29.769974   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:30.265981   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:30.270284   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:30.270313   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:30.765915   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:30.771542   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I1209 23:59:30.781357   84259 api_server.go:141] control plane version: v1.31.2
	I1209 23:59:30.781390   84259 api_server.go:131] duration metric: took 6.516238077s to wait for apiserver health ...
	I1209 23:59:30.781400   84259 cni.go:84] Creating CNI manager for ""
	I1209 23:59:30.781409   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:30.783438   84259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:59:30.784794   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:59:30.794916   84259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
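Once the apiserver is healthy, the bridge CNI is configured by copying a conflist into /etc/cni/net.d (the 496-byte 1-k8s.conflist above). The exact file is not reproduced in the log; the snippet below writes a typical bridge + portmap conflist of the same shape, purely as an illustration of what such a file looks like:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// A representative bridge CNI conflist; the real 1-k8s.conflist minikube copies
// may differ in fields and values.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d" // requires root; point at a temp dir to try it out safely
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	path := filepath.Join(dir, "1-k8s.conflist")
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %s (%d bytes)\n", path, len(bridgeConflist))
}
```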
	I1209 23:59:30.812099   84259 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:59:30.822915   84259 system_pods.go:59] 8 kube-system pods found
	I1209 23:59:30.822967   84259 system_pods.go:61] "coredns-7c65d6cfc9-wclgl" [22b2e24e-5a03-4d4f-a071-68e414aaf6cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:59:30.822980   84259 system_pods.go:61] "etcd-default-k8s-diff-port-871210" [2058a33f-dd4b-42bc-94d9-4cd130b9389c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:59:30.822990   84259 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-871210" [26ffd01d-48b5-4e38-a91b-bf75dd75e3c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:59:30.823000   84259 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-871210" [cea3a810-c7e9-48d6-9fd7-1f6258c45387] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:59:30.823006   84259 system_pods.go:61] "kube-proxy-d7sxm" [e28ccb5f-a282-4371-ae1a-fca52dd58616] Running
	I1209 23:59:30.823021   84259 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-871210" [790619f6-29a2-4bbc-89ba-a7b93653459b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:59:30.823033   84259 system_pods.go:61] "metrics-server-6867b74b74-lgzdz" [2c251249-58b6-44c4-929a-0f5c963d83b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:59:30.823039   84259 system_pods.go:61] "storage-provisioner" [63078ee5-81b8-4548-88bf-2b89857ea01a] Running
	I1209 23:59:30.823050   84259 system_pods.go:74] duration metric: took 10.914419ms to wait for pod list to return data ...
	I1209 23:59:30.823063   84259 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:59:30.828012   84259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:59:30.828062   84259 node_conditions.go:123] node cpu capacity is 2
	I1209 23:59:30.828076   84259 node_conditions.go:105] duration metric: took 5.004481ms to run NodePressure ...
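The NodePressure step above reads each node's conditions and capacity (ephemeral storage 17734596Ki and 2 CPUs in this run). A small client-go sketch that reads the same fields; the kubeconfig path is again a placeholder:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure / DiskPressure / PIDPressure should be False on a healthy node.
			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
			}
		}
	}
}
```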
	I1209 23:59:30.828097   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:31.092839   84259 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 23:59:31.097588   84259 kubeadm.go:739] kubelet initialised
	I1209 23:59:31.097613   84259 kubeadm.go:740] duration metric: took 4.744115ms waiting for restarted kubelet to initialise ...
	I1209 23:59:31.097623   84259 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:31.104318   84259 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:30.991511   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:33.490390   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:34.824186   83859 start.go:364] duration metric: took 55.338760885s to acquireMachinesLock for "no-preload-048296"
	I1209 23:59:34.824245   83859 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:59:34.824272   83859 fix.go:54] fixHost starting: 
	I1209 23:59:34.824660   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:34.824713   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:34.842421   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I1209 23:59:34.842851   83859 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:34.843297   83859 main.go:141] libmachine: Using API Version  1
	I1209 23:59:34.843319   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:34.843701   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:34.843916   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:34.844066   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1209 23:59:34.845768   83859 fix.go:112] recreateIfNeeded on no-preload-048296: state=Stopped err=<nil>
	I1209 23:59:34.845792   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	W1209 23:59:34.845958   83859 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:59:34.847617   83859 out.go:177] * Restarting existing kvm2 VM for "no-preload-048296" ...
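The no-preload profile spends ~55s in acquireMachinesLock before fixHost finds the existing VM Stopped and restarts it instead of recreating it. A simplified per-machine lock plus "restart if stopped" flow; the Machine interface, State values, and fakeVM below are stand-ins for the libmachine API, not its real types:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// State is a stand-in for the libmachine host state.
type State int

const (
	Running State = iota
	Stopped
)

// Machine is a stand-in for the libmachine host API the log drives over RPC.
type Machine interface {
	Name() string
	GetState() (State, error)
	Start() error
}

var (
	mu    sync.Mutex
	locks = map[string]*sync.Mutex{}
)

// acquireMachinesLock serialises work on a named machine and reports how long
// the caller waited, like the "took 55.3s to acquireMachinesLock" line above.
func acquireMachinesLock(name string) func() {
	mu.Lock()
	l, ok := locks[name]
	if !ok {
		l = &sync.Mutex{}
		locks[name] = l
	}
	mu.Unlock()

	start := time.Now()
	l.Lock()
	fmt.Printf("took %v to acquireMachinesLock for %q\n", time.Since(start), name)
	return l.Unlock
}

// fixHost reuses an existing machine: if it is Stopped, restart it rather than
// recreating it ("Skipping create...Using existing machine configuration").
func fixHost(m Machine) error {
	unlock := acquireMachinesLock(m.Name())
	defer unlock()

	st, err := m.GetState()
	if err != nil {
		return err
	}
	if st == Stopped {
		fmt.Printf("* Restarting existing VM for %q ...\n", m.Name())
		return m.Start()
	}
	return nil
}

type fakeVM struct {
	name  string
	state State
}

func (f *fakeVM) Name() string             { return f.name }
func (f *fakeVM) GetState() (State, error) { return f.state, nil }
func (f *fakeVM) Start() error             { f.state = Running; return nil }

func main() {
	if err := fixHost(&fakeVM{name: "no-preload-048296", state: Stopped}); err != nil {
		fmt.Println(err)
	}
}
```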
	I1209 23:59:33.378848   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379297   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has current primary IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379326   84547 main.go:141] libmachine: (old-k8s-version-720064) Found IP for machine: 192.168.39.188
	I1209 23:59:33.379351   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserving static IP address...
	I1209 23:59:33.379701   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.379722   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserved static IP address: 192.168.39.188
	I1209 23:59:33.379743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | skip adding static IP to network mk-old-k8s-version-720064 - found existing host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"}
	I1209 23:59:33.379759   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Getting to WaitForSSH function...
	I1209 23:59:33.379776   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting for SSH to be available...
	I1209 23:59:33.381697   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.381990   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.382027   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.382137   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH client type: external
	I1209 23:59:33.382163   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa (-rw-------)
	I1209 23:59:33.382193   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:33.382212   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | About to run SSH command:
	I1209 23:59:33.382229   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | exit 0
	I1209 23:59:33.507653   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | SSH cmd err, output: <nil>: 
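WaitForSSH shells out to the system ssh binary with the options shown above (no known_hosts, key-only auth, 10s connect timeout) and considers the machine reachable once `exit 0` succeeds. Roughly, with the address and key path reduced to placeholders:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshExitZero runs `exit 0` on the host via the external ssh client, using the
// same style of options the log shows.
func sshExitZero(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	addr, key := "192.168.39.188", "/home/user/.ssh/id_rsa" // placeholders
	for i := 0; i < 30; i++ {
		if err := sshExitZero(addr, key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```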
	I1209 23:59:33.507957   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetConfigRaw
	I1209 23:59:33.508555   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.511020   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511399   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.511431   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511695   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:59:33.511932   84547 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:33.511958   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:33.512144   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.514383   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514722   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.514743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514876   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.515048   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515187   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515349   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.515596   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.515835   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.515850   84547 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:33.623370   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:33.623407   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623670   84547 buildroot.go:166] provisioning hostname "old-k8s-version-720064"
	I1209 23:59:33.623708   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623909   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.626370   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626749   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.626781   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626943   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.627172   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627348   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627490   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.627666   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.627868   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.627884   84547 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-720064 && echo "old-k8s-version-720064" | sudo tee /etc/hostname
	I1209 23:59:33.749338   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-720064
	
	I1209 23:59:33.749370   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.752401   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.752774   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.752807   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.753000   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.753180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753372   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753531   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.753674   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.753837   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.753853   84547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-720064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-720064/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-720064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:33.872512   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
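The hostname step runs the shell snippet shown above: it only touches /etc/hosts when the new hostname is missing, rewriting the 127.0.1.1 line if one exists and appending otherwise. The same idempotent edit sketched in Go, writing to a copy of the file rather than /etc/hosts itself:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry loosely mirrors the shell snippet from the log: if no line
// already ends with the hostname, rewrite an existing "127.0.1.1 ..." line or
// append a new one.
func ensureHostsEntry(contents, host string) string {
	lines := strings.Split(contents, "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+host) {
			return contents // an entry for this hostname already exists
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + host
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(contents, "\n") + "\n127.0.1.1 " + host + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	updated := ensureHostsEntry(string(data), "old-k8s-version-720064")
	// Write a copy instead of /etc/hosts; the real step edits in place via sudo sed/tee.
	if err := os.WriteFile("/tmp/hosts.updated", []byte(updated), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /tmp/hosts.updated")
}
```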
	I1209 23:59:33.872548   84547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:33.872575   84547 buildroot.go:174] setting up certificates
	I1209 23:59:33.872585   84547 provision.go:84] configureAuth start
	I1209 23:59:33.872597   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.872906   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.875719   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876012   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.876051   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876210   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.878539   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.878857   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.878894   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.879025   84547 provision.go:143] copyHostCerts
	I1209 23:59:33.879087   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:33.879100   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:33.879166   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:33.879254   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:33.879262   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:33.879283   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:33.879338   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:33.879346   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:33.879365   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:33.879414   84547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-720064 san=[127.0.0.1 192.168.39.188 localhost minikube old-k8s-version-720064]
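provision.go then generates a server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube and the machine name. A condensed, self-signed variant of that step with the SANs and organization taken from the log line above; the real step signs with the minikube CA key rather than self-signing:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-720064"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-720064"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.188")},
	}

	// Self-signed for brevity; minikube signs with its own CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
```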
	I1209 23:59:34.211417   84547 provision.go:177] copyRemoteCerts
	I1209 23:59:34.211475   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:34.211504   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.214285   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.214686   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214789   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.215006   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.215180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.215345   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.301399   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:59:34.326216   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:34.349136   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 23:59:34.371597   84547 provision.go:87] duration metric: took 498.999263ms to configureAuth
	I1209 23:59:34.371633   84547 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:34.371832   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:59:34.371902   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.374649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.374985   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.375031   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.375161   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.375368   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375541   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.375897   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.376128   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.376152   84547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:34.588357   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:34.588391   84547 machine.go:96] duration metric: took 1.076442399s to provisionDockerMachine
	I1209 23:59:34.588402   84547 start.go:293] postStartSetup for "old-k8s-version-720064" (driver="kvm2")
	I1209 23:59:34.588412   84547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:34.588443   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.588749   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:34.588772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.591413   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.591805   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.591836   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.592001   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.592176   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.592325   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.592431   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.673445   84547 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:34.677394   84547 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:34.677421   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:34.677498   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:34.677600   84547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:34.677716   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:34.686261   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:34.709412   84547 start.go:296] duration metric: took 120.993275ms for postStartSetup
	I1209 23:59:34.709467   84547 fix.go:56] duration metric: took 19.749026723s for fixHost
	I1209 23:59:34.709495   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.712458   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712762   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.712796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712930   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.713158   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713335   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713506   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.713690   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.713852   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.713862   84547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:34.823993   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788774.778513264
	
	I1209 23:59:34.824022   84547 fix.go:216] guest clock: 1733788774.778513264
	I1209 23:59:34.824034   84547 fix.go:229] Guest: 2024-12-09 23:59:34.778513264 +0000 UTC Remote: 2024-12-09 23:59:34.709473288 +0000 UTC m=+249.236536627 (delta=69.039976ms)
	I1209 23:59:34.824067   84547 fix.go:200] guest clock delta is within tolerance: 69.039976ms
	I1209 23:59:34.824075   84547 start.go:83] releasing machines lock for "old-k8s-version-720064", held for 19.863672525s
	I1209 23:59:34.824110   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.824391   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:34.827165   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827596   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.827629   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827894   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828467   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828690   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828802   84547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:34.828865   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.828888   84547 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:34.828917   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.831978   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832052   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832514   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832548   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832572   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832748   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832751   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832899   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832982   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.832986   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.833198   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833217   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833370   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.833374   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.949044   84547 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:34.954999   84547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:35.100096   84547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:35.105764   84547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:35.105852   84547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:35.126893   84547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:35.126915   84547 start.go:495] detecting cgroup driver to use...
	I1209 23:59:35.126968   84547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:35.143236   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:35.157019   84547 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:35.157098   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:35.170608   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:35.184816   84547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:35.310043   84547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:35.486472   84547 docker.go:233] disabling docker service ...
	I1209 23:59:35.486653   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:35.501938   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:35.516997   84547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:35.667304   84547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:35.821316   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:35.836423   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:35.854633   84547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 23:59:35.854719   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.865592   84547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:35.865677   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.877247   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.889007   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.900196   84547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:35.912403   84547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:35.922919   84547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:35.922987   84547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:35.939439   84547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:35.949715   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:36.100115   84547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:36.207129   84547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:36.207206   84547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:36.213672   84547 start.go:563] Will wait 60s for crictl version
	I1209 23:59:36.213758   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:36.218204   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:36.255262   84547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:36.255367   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.286277   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.317334   84547 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 23:59:34.848733   83859 main.go:141] libmachine: (no-preload-048296) Calling .Start
	I1209 23:59:34.848943   83859 main.go:141] libmachine: (no-preload-048296) Ensuring networks are active...
	I1209 23:59:34.849641   83859 main.go:141] libmachine: (no-preload-048296) Ensuring network default is active
	I1209 23:59:34.850039   83859 main.go:141] libmachine: (no-preload-048296) Ensuring network mk-no-preload-048296 is active
	I1209 23:59:34.850540   83859 main.go:141] libmachine: (no-preload-048296) Getting domain xml...
	I1209 23:59:34.851224   83859 main.go:141] libmachine: (no-preload-048296) Creating domain...
	I1209 23:59:36.270761   83859 main.go:141] libmachine: (no-preload-048296) Waiting to get IP...
	I1209 23:59:36.271970   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.272578   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.272645   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.272536   85657 retry.go:31] will retry after 253.181092ms: waiting for machine to come up
	I1209 23:59:36.527128   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.527735   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.527762   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.527683   85657 retry.go:31] will retry after 297.608817ms: waiting for machine to come up
	I1209 23:59:36.827372   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.828819   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.828849   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.828763   85657 retry.go:31] will retry after 374.112777ms: waiting for machine to come up
	I1209 23:59:33.112105   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:35.611353   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:36.115472   84259 pod_ready.go:93] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:36.115504   84259 pod_ready.go:82] duration metric: took 5.011153637s for pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:36.115521   84259 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:35.492415   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:37.992287   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:36.318651   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:36.321927   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322336   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:36.322366   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322619   84547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:36.327938   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:36.344152   84547 kubeadm.go:883] updating cluster {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:36.344279   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:59:36.344328   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:36.391261   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:36.391348   84547 ssh_runner.go:195] Run: which lz4
	I1209 23:59:36.395391   84547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:59:36.399585   84547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:59:36.399622   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 23:59:37.969251   84547 crio.go:462] duration metric: took 1.573891158s to copy over tarball
	I1209 23:59:37.969362   84547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:59:37.204687   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:37.205458   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:37.205479   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:37.205372   85657 retry.go:31] will retry after 420.490569ms: waiting for machine to come up
	I1209 23:59:37.627167   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:37.629813   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:37.629839   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:37.629687   85657 retry.go:31] will retry after 561.795207ms: waiting for machine to come up
	I1209 23:59:38.193652   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:38.194178   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:38.194200   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:38.194119   85657 retry.go:31] will retry after 925.018936ms: waiting for machine to come up
	I1209 23:59:39.121476   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:39.122014   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:39.122046   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:39.121958   85657 retry.go:31] will retry after 984.547761ms: waiting for machine to come up
	I1209 23:59:40.108478   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:40.108947   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:40.108981   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:40.108925   85657 retry.go:31] will retry after 1.261498912s: waiting for machine to come up
	I1209 23:59:41.372403   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:41.372901   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:41.372922   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:41.372884   85657 retry.go:31] will retry after 1.498283156s: waiting for machine to come up
	I1209 23:59:38.122964   84259 pod_ready.go:93] pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:38.122990   84259 pod_ready.go:82] duration metric: took 2.007459606s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:38.123004   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.133257   84259 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:40.631911   84259 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:40.631940   84259 pod_ready.go:82] duration metric: took 2.508926956s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.631957   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.492044   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:42.991179   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:41.050494   84547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081073233s)
	I1209 23:59:41.050524   84547 crio.go:469] duration metric: took 3.081237568s to extract the tarball
	I1209 23:59:41.050533   84547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:59:41.092694   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:41.125264   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:41.125289   84547 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.125378   84547 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.125388   84547 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.125436   84547 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 23:59:41.125466   84547 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.125497   84547 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.125515   84547 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127285   84547 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.127325   84547 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 23:59:41.127376   84547 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.127405   84547 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127421   84547 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.127300   84547 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.302277   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.314265   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 23:59:41.323683   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.326327   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.345042   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.345550   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.362324   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.367612   84547 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 23:59:41.367663   84547 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.367706   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.406644   84547 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 23:59:41.406691   84547 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 23:59:41.406783   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.450490   84547 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 23:59:41.450550   84547 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.450601   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.477332   84547 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 23:59:41.477389   84547 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.477438   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499714   84547 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 23:59:41.499757   84547 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.499783   84547 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 23:59:41.499806   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499818   84547 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.499861   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505128   84547 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 23:59:41.505179   84547 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.505205   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.505249   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.505212   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505306   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.505342   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.507860   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.507923   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.643108   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.644251   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.644273   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.644358   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.644400   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.650732   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.650817   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.777981   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.789388   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.789435   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.789491   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.789561   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.789592   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.802784   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.912370   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.914494   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 23:59:41.944460   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 23:59:41.944564   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 23:59:41.944569   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.944660   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 23:59:41.944662   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 23:59:41.944726   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 23:59:42.104003   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 23:59:42.104074   84547 cache_images.go:92] duration metric: took 978.7734ms to LoadCachedImages
	W1209 23:59:42.104176   84547 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1209 23:59:42.104198   84547 kubeadm.go:934] updating node { 192.168.39.188 8443 v1.20.0 crio true true} ...
	I1209 23:59:42.104326   84547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-720064 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:42.104431   84547 ssh_runner.go:195] Run: crio config
	I1209 23:59:42.154016   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:59:42.154041   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:42.154050   84547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:42.154066   84547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.188 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-720064 NodeName:old-k8s-version-720064 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 23:59:42.154226   84547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-720064"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.188
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.188"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:42.154292   84547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 23:59:42.164866   84547 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:42.164943   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:42.175137   84547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 23:59:42.192176   84547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:42.210695   84547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 23:59:42.230364   84547 ssh_runner.go:195] Run: grep 192.168.39.188	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:42.234239   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:42.246747   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:42.379643   84547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:42.396626   84547 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064 for IP: 192.168.39.188
	I1209 23:59:42.396713   84547 certs.go:194] generating shared ca certs ...
	I1209 23:59:42.396746   84547 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.396942   84547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:42.397007   84547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:42.397023   84547 certs.go:256] generating profile certs ...
	I1209 23:59:42.397191   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/client.key
	I1209 23:59:42.397270   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key.abcbe3fa
	I1209 23:59:42.397330   84547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key
	I1209 23:59:42.397516   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:42.397564   84547 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:42.397580   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:42.397623   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:42.397662   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:42.397701   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:42.397768   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:42.398726   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:42.450053   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:42.475685   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:42.514396   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:42.562383   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 23:59:42.589690   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:59:42.614149   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:42.647112   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:42.671117   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:42.694303   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:42.722563   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:42.753478   84547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:42.773673   84547 ssh_runner.go:195] Run: openssl version
	I1209 23:59:42.779930   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:42.791531   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796422   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796536   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.802386   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:42.817058   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:42.828570   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833292   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833357   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.839679   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:42.850729   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:42.861699   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866417   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866479   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.873602   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:42.884398   84547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:42.889137   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:42.896108   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:42.902584   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:42.908706   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:42.914313   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:42.919943   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 23:59:42.925417   84547 kubeadm.go:392] StartCluster: {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:42.925546   84547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:42.925602   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:42.962948   84547 cri.go:89] found id: ""
	I1209 23:59:42.963030   84547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:42.973746   84547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:42.973768   84547 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:42.973819   84547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:42.983593   84547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:42.984528   84547 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-720064" does not appear in /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:59:42.985106   84547 kubeconfig.go:62] /home/jenkins/minikube-integration/19888-18950/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-720064" cluster setting kubeconfig missing "old-k8s-version-720064" context setting]
	I1209 23:59:42.985981   84547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.996842   84547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:43.007858   84547 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.188
	I1209 23:59:43.007901   84547 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:43.007916   84547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:43.007982   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:43.046416   84547 cri.go:89] found id: ""
	I1209 23:59:43.046495   84547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:43.063226   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:43.073919   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:43.073946   84547 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:43.073991   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:59:43.084217   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:43.084292   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:43.094196   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:59:43.104098   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:43.104178   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:43.115056   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.124696   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:43.124761   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.135180   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:59:43.146325   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:43.146392   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
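The block above repeats one pattern four times: check whether each kubeconfig-style file under `/etc/kubernetes` already points at `https://control-plane.minikube.internal:8443`, and remove it otherwise so the next `kubeadm init phase` calls can regenerate it. A minimal Go sketch of that pattern, with illustrative helper names:

```go
// Hedged sketch of the stale-config cleanup loop seen in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// removeIfStale keeps a config file only if it already targets the expected
// control-plane endpoint; missing files are simply skipped, matching the
// "No such file or directory" branches above.
func removeIfStale(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil
		}
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil
	}
	fmt.Printf("removing stale config %s\n", path)
	return os.Remove(path)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```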
	I1209 23:59:43.156017   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:43.166584   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:43.376941   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.198180   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.431002   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.549010   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.644736   84547 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:44.644827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:45.145713   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:42.872724   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:42.873229   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:42.873260   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:42.873178   85657 retry.go:31] will retry after 1.469240763s: waiting for machine to come up
	I1209 23:59:44.344825   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:44.345339   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:44.345374   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:44.345317   85657 retry.go:31] will retry after 1.85673524s: waiting for machine to come up
	I1209 23:59:46.203693   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:46.204128   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:46.204152   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:46.204065   85657 retry.go:31] will retry after 3.5180616s: waiting for machine to come up
	I1209 23:59:43.003893   84259 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:43.734778   84259 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:43.734806   84259 pod_ready.go:82] duration metric: took 3.102839103s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.734821   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d7sxm" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.743393   84259 pod_ready.go:93] pod "kube-proxy-d7sxm" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:43.743421   84259 pod_ready.go:82] duration metric: took 8.592191ms for pod "kube-proxy-d7sxm" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.743435   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:44.250028   84259 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:44.250051   84259 pod_ready.go:82] duration metric: took 506.607784ms for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:44.250063   84259 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:46.256096   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:45.494189   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:47.993074   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:45.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.145300   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.644880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.144955   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.645081   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.145132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.645278   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.144932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.645874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:50.145061   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.725528   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:49.726040   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:49.726069   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:49.725982   85657 retry.go:31] will retry after 3.98915487s: waiting for machine to come up
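The `retry.go` lines above show libmachine waiting for the VM to obtain a DHCP lease, sleeping a little longer after each miss (1.47s, 1.86s, 3.52s, 3.99s). The following is a hedged Go sketch of that retry shape; `lookupIP` is a placeholder for the real "query libvirt for the lease matching this MAC" call, and the growth factor and jitter are assumptions for illustration.

```go
// Hedged sketch of polling with growing, jittered backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is hypothetical; minikube would inspect the libvirt network here.
func lookupIP(mac string) (string, error) { return "", errNoLease }

func waitForIP(mac string, attempts int) (string, error) {
	backoff := time.Second
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2 // grow roughly like the intervals in the log
	}
	return "", fmt.Errorf("machine %s never reported an IP", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:c6:cf:c7", 5); err != nil {
		fmt.Println(err)
	}
}
```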
	I1209 23:59:48.256397   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.757098   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.491110   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:52.989722   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.645171   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.144908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.645198   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.145700   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.145782   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.645678   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.145521   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:55.145578   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
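The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` commands above poll roughly every 500ms until the API-server process shows up (or a deadline is hit). A minimal Go sketch of that wait loop, with an assumed two-minute timeout; it is not minikube's actual wait helper.

```go
// Hedged sketch: poll for the kube-apiserver process at a fixed interval.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver process did not appear: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
```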
	I1209 23:59:53.718634   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.719141   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has current primary IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.719163   83859 main.go:141] libmachine: (no-preload-048296) Found IP for machine: 192.168.61.182
	I1209 23:59:53.719173   83859 main.go:141] libmachine: (no-preload-048296) Reserving static IP address...
	I1209 23:59:53.719594   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "no-preload-048296", mac: "52:54:00:c6:cf:c7", ip: "192.168.61.182"} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.719621   83859 main.go:141] libmachine: (no-preload-048296) DBG | skip adding static IP to network mk-no-preload-048296 - found existing host DHCP lease matching {name: "no-preload-048296", mac: "52:54:00:c6:cf:c7", ip: "192.168.61.182"}
	I1209 23:59:53.719632   83859 main.go:141] libmachine: (no-preload-048296) Reserved static IP address: 192.168.61.182
	I1209 23:59:53.719641   83859 main.go:141] libmachine: (no-preload-048296) Waiting for SSH to be available...
	I1209 23:59:53.719671   83859 main.go:141] libmachine: (no-preload-048296) DBG | Getting to WaitForSSH function...
	I1209 23:59:53.722015   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.722309   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.722342   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.722445   83859 main.go:141] libmachine: (no-preload-048296) DBG | Using SSH client type: external
	I1209 23:59:53.722471   83859 main.go:141] libmachine: (no-preload-048296) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa (-rw-------)
	I1209 23:59:53.722504   83859 main.go:141] libmachine: (no-preload-048296) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:53.722517   83859 main.go:141] libmachine: (no-preload-048296) DBG | About to run SSH command:
	I1209 23:59:53.722529   83859 main.go:141] libmachine: (no-preload-048296) DBG | exit 0
	I1209 23:59:53.843261   83859 main.go:141] libmachine: (no-preload-048296) DBG | SSH cmd err, output: <nil>: 
	I1209 23:59:53.843673   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetConfigRaw
	I1209 23:59:53.844367   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:53.846889   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.847170   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.847203   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.847433   83859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/config.json ...
	I1209 23:59:53.847656   83859 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:53.847675   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:53.847834   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:53.849867   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.850179   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.850213   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.850372   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:53.850548   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.850702   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.850878   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:53.851058   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:53.851288   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:53.851303   83859 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:53.947889   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:53.947916   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:53.948220   83859 buildroot.go:166] provisioning hostname "no-preload-048296"
	I1209 23:59:53.948259   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:53.948496   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:53.951070   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.951479   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.951509   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.951729   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:53.951919   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.952120   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.952280   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:53.952456   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:53.952650   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:53.952672   83859 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-048296 && echo "no-preload-048296" | sudo tee /etc/hostname
	I1209 23:59:54.065706   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-048296
	
	I1209 23:59:54.065738   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.068629   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.068942   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.068975   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.069241   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.069493   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.069731   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.069939   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.070156   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.070325   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.070346   83859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-048296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-048296/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-048296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:54.180424   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:59:54.180460   83859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:54.180479   83859 buildroot.go:174] setting up certificates
	I1209 23:59:54.180490   83859 provision.go:84] configureAuth start
	I1209 23:59:54.180499   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:54.180802   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:54.183349   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.183703   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.183732   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.183852   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.186076   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.186341   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.186372   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.186498   83859 provision.go:143] copyHostCerts
	I1209 23:59:54.186554   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:54.186564   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:54.186627   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:54.186715   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:54.186723   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:54.186744   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:54.186805   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:54.186814   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:54.186831   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:54.186881   83859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.no-preload-048296 san=[127.0.0.1 192.168.61.182 localhost minikube no-preload-048296]
	I1209 23:59:54.291230   83859 provision.go:177] copyRemoteCerts
	I1209 23:59:54.291296   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:54.291319   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.293968   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.294257   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.294283   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.294451   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.294654   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.294814   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.294963   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.377572   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:54.401222   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 23:59:54.425323   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:59:54.448358   83859 provision.go:87] duration metric: took 267.838318ms to configureAuth
	I1209 23:59:54.448387   83859 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:54.448563   83859 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:54.448629   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.451082   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.451422   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.451450   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.451678   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.451906   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.452088   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.452222   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.452374   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.452550   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.452567   83859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:54.665629   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:54.665663   83859 machine.go:96] duration metric: took 817.991465ms to provisionDockerMachine
	I1209 23:59:54.665677   83859 start.go:293] postStartSetup for "no-preload-048296" (driver="kvm2")
	I1209 23:59:54.665690   83859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:54.665712   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.666016   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:54.666043   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.668993   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.669501   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.669532   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.669666   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.669859   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.670004   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.670160   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.752122   83859 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:54.756546   83859 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:54.756569   83859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:54.756640   83859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:54.756706   83859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:54.756797   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:54.766465   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:54.792214   83859 start.go:296] duration metric: took 126.521539ms for postStartSetup
	I1209 23:59:54.792263   83859 fix.go:56] duration metric: took 19.967992145s for fixHost
	I1209 23:59:54.792284   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.794921   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.795304   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.795334   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.795549   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.795807   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.795986   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.796154   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.796333   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.796485   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.796495   83859 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:54.900382   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788794.870799171
	
	I1209 23:59:54.900413   83859 fix.go:216] guest clock: 1733788794.870799171
	I1209 23:59:54.900423   83859 fix.go:229] Guest: 2024-12-09 23:59:54.870799171 +0000 UTC Remote: 2024-12-09 23:59:54.792267927 +0000 UTC m=+357.892420200 (delta=78.531244ms)
	I1209 23:59:54.900443   83859 fix.go:200] guest clock delta is within tolerance: 78.531244ms
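The `fix.go` lines above read the guest's clock via `date +%s.%N`, compare it with the host's view of the time, and accept the ~78ms skew as within tolerance. A hedged Go sketch of that comparison; running `date` locally stands in for the SSH round trip, and the one-second tolerance is an assumption for illustration.

```go
// Hedged sketch of the guest-clock delta check.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClock parses the fractional Unix timestamp printed by `date +%s.%N`.
func guestClock() (time.Time, error) {
	out, err := exec.Command("date", "+%s.%N").Output() // over SSH in the real flow
	if err != nil {
		return time.Time{}, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(0, int64(secs*float64(time.Second))), nil
}

func main() {
	guest, err := guestClock()
	if err != nil {
		fmt.Println(err)
		return
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold for this sketch
	fmt.Printf("guest clock delta %s, within tolerance: %v\n", delta, delta <= tolerance)
}
```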
	I1209 23:59:54.900448   83859 start.go:83] releasing machines lock for "no-preload-048296", held for 20.076230285s
	I1209 23:59:54.900466   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.900763   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:54.903437   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.903762   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.903785   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.903941   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904412   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904583   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904674   83859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:54.904735   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.904788   83859 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:54.904815   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.907540   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.907693   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.907936   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.907960   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.908083   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.908098   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.908108   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.908269   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.908332   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.908431   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.908578   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.908607   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.908750   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.908753   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.980588   83859 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:55.003079   83859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:55.152159   83859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:55.158212   83859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:55.158284   83859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:55.177510   83859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:55.177539   83859 start.go:495] detecting cgroup driver to use...
	I1209 23:59:55.177616   83859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:55.194262   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:55.210699   83859 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:55.210770   83859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:55.226308   83859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:55.242175   83859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:55.361845   83859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:55.500415   83859 docker.go:233] disabling docker service ...
	I1209 23:59:55.500487   83859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:55.515689   83859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:55.528651   83859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:55.663341   83859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:55.776773   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:55.790155   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:55.807749   83859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:59:55.807807   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.817580   83859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:55.817644   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.827975   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.837871   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.848118   83859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:55.858322   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.867754   83859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.884626   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.894254   83859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:55.903128   83859 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:55.903187   83859 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:55.914887   83859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
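The sequence above is a fallback: the `net.bridge.bridge-nf-call-iptables` sysctl only exists once the `br_netfilter` module is loaded, so when the first check fails the module is loaded with modprobe and IPv4 forwarding is switched on. A short Go sketch of that sequence, mirroring the commands in the log with simplified error handling:

```go
// Hedged sketch of the bridge-netfilter / ip_forward preparation.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
	}
	return nil
}

func main() {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// Sysctl missing until br_netfilter is loaded, so fall back to modprobe.
		fmt.Println("bridge-nf sysctl missing, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println(err)
		}
	}
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println(err)
	}
}
```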
	I1209 23:59:55.924665   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:56.028206   83859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:56.117484   83859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:56.117573   83859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:56.122345   83859 start.go:563] Will wait 60s for crictl version
	I1209 23:59:56.122401   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.126032   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:56.161884   83859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:56.161978   83859 ssh_runner.go:195] Run: crio --version
	I1209 23:59:56.194758   83859 ssh_runner.go:195] Run: crio --version
	I1209 23:59:56.224190   83859 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:59:56.225560   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:56.228550   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:56.228928   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:56.228950   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:56.229163   83859 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:56.233615   83859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:56.245890   83859 kubeadm.go:883] updating cluster {Name:no-preload-048296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:56.246079   83859 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:59:56.246132   83859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:56.285601   83859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:59:56.285629   83859 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:59:56.285694   83859 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:56.285734   83859 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.285761   83859 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.285813   83859 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.285858   83859 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.285900   83859 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.285810   83859 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 23:59:56.285983   83859 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.287414   83859 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 23:59:56.287436   83859 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.287436   83859 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.287446   83859 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.287465   83859 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.287500   83859 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.287550   83859 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:56.287550   83859 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.437713   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.453590   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.457708   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.468937   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1209 23:59:56.474766   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.477321   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.483791   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.530686   83859 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1209 23:59:56.530735   83859 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.530786   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.603275   83859 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1209 23:59:56.603327   83859 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.603376   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.612591   83859 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1209 23:59:56.612635   83859 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.612686   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726597   83859 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1209 23:59:56.726643   83859 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.726650   83859 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1209 23:59:56.726682   83859 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.726692   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726728   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726752   83859 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1209 23:59:56.726777   83859 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.726808   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726824   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.726882   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.726889   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.795578   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.795624   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.795646   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.795581   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.795723   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.795847   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.905584   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.909282   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.927659   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.927832   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.927928   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.932487   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:53.256494   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:55.257888   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:54.991894   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:57.491500   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:55.645853   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.145844   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.644899   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.145431   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.645625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.145096   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.645933   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.145181   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.645624   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:00.145874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.971745   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1209 23:59:56.971844   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:56.987218   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 23:59:56.987380   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 23:59:57.050691   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:57.064778   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:57.087868   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:57.087972   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:57.089176   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 23:59:57.089235   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1209 23:59:57.089262   83859 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:57.089269   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 23:59:57.089311   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:57.089355   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1209 23:59:57.098939   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 23:59:57.099044   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 23:59:57.148482   83859 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1209 23:59:57.148532   83859 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:57.148588   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:57.173723   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 23:59:57.173810   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 23:59:57.186640   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 23:59:57.186734   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:00.723670   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.634331388s)
	I1210 00:00:00.723704   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1210 00:00:00.723717   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (3.634425647s)
	I1210 00:00:00.723751   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1210 00:00:00.723728   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 00:00:00.723767   83859 ssh_runner.go:235] Completed: which crictl: (3.575163822s)
	I1210 00:00:00.723815   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:00.723857   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (3.550033094s)
	I1210 00:00:00.723816   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 00:00:00.723909   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (3.537159613s)
	I1210 00:00:00.723934   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1210 00:00:00.723854   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.624679414s)
	I1210 00:00:00.723974   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1210 00:00:00.723880   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1209 23:59:57.756271   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:00.256451   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:02.256956   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:59.491859   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:01.992860   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:00.644886   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.145063   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.645932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.145743   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.645396   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.145007   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.645927   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.145876   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:05.145906   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.713826   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.989917623s)
	I1210 00:00:02.713866   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1210 00:00:02.713883   83859 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.99004494s)
	I1210 00:00:02.713896   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 00:00:02.713949   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:02.713952   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 00:00:02.753832   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:04.780306   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.066273178s)
	I1210 00:00:04.780344   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1210 00:00:04.780368   83859 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:04.780432   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:04.780430   83859 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.026560705s)
	I1210 00:00:04.780544   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 00:00:04.780618   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:06.756731   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.976269529s)
	I1210 00:00:06.756763   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1210 00:00:06.756774   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.976141757s)
	I1210 00:00:06.756793   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 00:00:06.756791   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 00:00:06.756846   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 00:00:04.258390   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:06.756422   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:04.490429   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:06.991054   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:08.991247   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:05.644919   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.145142   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.645240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.144864   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.145246   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.645965   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.645731   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:10.145638   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.712114   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.955245193s)
	I1210 00:00:08.712142   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1210 00:00:08.712166   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 00:00:08.712212   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 00:00:10.892294   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.180058873s)
	I1210 00:00:10.892319   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1210 00:00:10.892352   83859 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:10.892391   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:11.542174   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 00:00:11.542214   83859 cache_images.go:123] Successfully loaded all cached images
	I1210 00:00:11.542219   83859 cache_images.go:92] duration metric: took 15.256564286s to LoadCachedImages
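[Editor's note] The block above shows the no-preload image flow: minikube checks each image with `podman image inspect`, removes stale tags with `crictl rmi`, then loads the cached tarballs from /var/lib/minikube/images with `podman load`. The sketch below is a minimal local Go reproduction of that check-then-load step, not part of the log; it assumes it runs directly on the node (no ssh_runner) with sudo, podman, and the tarball path copied from the log.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Image name and tarball path taken from the log above.
	image := "registry.k8s.io/etcd:3.5.15-0"
	tarball := "/var/lib/minikube/images/etcd_3.5.15-0"

	// Same presence check as the log: `sudo podman image inspect --format {{.Id}} <image>`.
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run(); err == nil {
		fmt.Println(image, "already present in the container runtime, nothing to do")
		return
	}

	// Same load step as the log: `sudo podman load -i <tarball>`.
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		log.Fatalf("podman load failed: %v\n%s", err, out)
	}
	fmt.Printf("loaded %s from %s\n", image, tarball)
}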
	I1210 00:00:11.542230   83859 kubeadm.go:934] updating node { 192.168.61.182 8443 v1.31.2 crio true true} ...
	I1210 00:00:11.542334   83859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-048296 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:00:11.542397   83859 ssh_runner.go:195] Run: crio config
	I1210 00:00:11.586768   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:00:11.586787   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:00:11.586795   83859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:00:11.586817   83859 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.182 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-048296 NodeName:no-preload-048296 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:00:11.586932   83859 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-048296"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.182"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.182"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
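[Editor's note] The YAML above is the generated config that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines later. As a minimal sketch (not minikube code), the Go program below parses just the KubeletConfiguration document from that output and prints the runtime endpoint and cgroup driver; the struct covers only a subset of fields, and the gopkg.in/yaml.v3 dependency is an assumption of this example.

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// kubeletConfig models only the fields of KubeletConfiguration that this
// example cares about; all other keys in the document are ignored.
type kubeletConfig struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	StaticPodPath            string `yaml:"staticPodPath"`
}

func main() {
	// A trimmed copy of the KubeletConfiguration document from the log above.
	doc := []byte(`
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
staticPodPath: /etc/kubernetes/manifests
`)
	var cfg kubeletConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: runtime=%s cgroupDriver=%s staticPods=%s\n",
		cfg.Kind, cfg.ContainerRuntimeEndpoint, cfg.CgroupDriver, cfg.StaticPodPath)
}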
	
	I1210 00:00:11.586992   83859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:00:11.597936   83859 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:00:11.597996   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:00:11.608068   83859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 00:00:11.624881   83859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:00:11.640934   83859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1210 00:00:11.658113   83859 ssh_runner.go:195] Run: grep 192.168.61.182	control-plane.minikube.internal$ /etc/hosts
	I1210 00:00:11.662360   83859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:00:11.674732   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:00:11.803743   83859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:00:11.821169   83859 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296 for IP: 192.168.61.182
	I1210 00:00:11.821191   83859 certs.go:194] generating shared ca certs ...
	I1210 00:00:11.821211   83859 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:00:11.821404   83859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1210 00:00:11.821485   83859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1210 00:00:11.821502   83859 certs.go:256] generating profile certs ...
	I1210 00:00:11.821583   83859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/client.key
	I1210 00:00:11.821644   83859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.key.9304569d
	I1210 00:00:11.821677   83859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.key
	I1210 00:00:11.821783   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1210 00:00:11.821811   83859 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1210 00:00:11.821821   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:00:11.821848   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1210 00:00:11.821881   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:00:11.821917   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1210 00:00:11.821965   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1210 00:00:11.822649   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:00:11.867065   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 00:00:11.898744   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:00:11.932711   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:00:08.758664   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:11.257156   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:11.491011   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:13.491036   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:10.645462   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.145646   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.645921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.145804   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.644924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.145055   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.645811   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.144877   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.645709   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.145827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.966670   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:00:11.997827   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:00:12.023344   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:00:12.048872   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:00:12.074332   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:00:12.097886   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1210 00:00:12.121883   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1210 00:00:12.145451   83859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:00:12.167089   83859 ssh_runner.go:195] Run: openssl version
	I1210 00:00:12.172747   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1210 00:00:12.183718   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.188458   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.188521   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.194537   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:00:12.205725   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:00:12.216441   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.221179   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.221237   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.226942   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:00:12.237830   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1210 00:00:12.248188   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.253689   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.253747   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.259326   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1210 00:00:12.269796   83859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:00:12.274316   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:00:12.280263   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:00:12.286235   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:00:12.292330   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:00:12.298347   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:00:12.304186   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
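[Editor's note] The six `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each control-plane certificate is still valid for at least 24 hours before the cluster is restarted. A minimal Go equivalent of one such check is sketched below; the certificate path is copied from the log, and running it directly on the node (with read access to /var/lib/minikube/certs) is an assumption.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; adjust for other certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in certificate file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors `openssl x509 -checkend 86400`: non-zero exit if the cert
	// expires within the next 24 hours.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h (would trigger regeneration)")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}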
	I1210 00:00:12.310189   83859 kubeadm.go:392] StartCluster: {Name:no-preload-048296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:00:12.310296   83859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:00:12.310349   83859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:00:12.346589   83859 cri.go:89] found id: ""
	I1210 00:00:12.346666   83859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:00:12.357674   83859 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 00:00:12.357701   83859 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 00:00:12.357753   83859 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 00:00:12.367817   83859 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:00:12.368827   83859 kubeconfig.go:125] found "no-preload-048296" server: "https://192.168.61.182:8443"
	I1210 00:00:12.371117   83859 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 00:00:12.380367   83859 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.182
	I1210 00:00:12.380393   83859 kubeadm.go:1160] stopping kube-system containers ...
	I1210 00:00:12.380404   83859 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 00:00:12.380446   83859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:00:12.413083   83859 cri.go:89] found id: ""
	I1210 00:00:12.413143   83859 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 00:00:12.429819   83859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:00:12.439244   83859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:00:12.439269   83859 kubeadm.go:157] found existing configuration files:
	
	I1210 00:00:12.439336   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:00:12.448182   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:00:12.448257   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:00:12.457759   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:00:12.467372   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:00:12.467449   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:00:12.476877   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:00:12.485831   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:00:12.485898   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:00:12.495537   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:00:12.504333   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:00:12.504398   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:00:12.514572   83859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:00:12.524134   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:12.619683   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.361844   83859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.742129567s)
	I1210 00:00:14.361876   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.585241   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.653166   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.771204   83859 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:00:14.771306   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.272143   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.771676   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.789350   83859 api_server.go:72] duration metric: took 1.018150373s to wait for apiserver process to appear ...
	I1210 00:00:15.789378   83859 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:00:15.789396   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:15.789878   83859 api_server.go:269] stopped: https://192.168.61.182:8443/healthz: Get "https://192.168.61.182:8443/healthz": dial tcp 192.168.61.182:8443: connect: connection refused
	I1210 00:00:16.289518   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:13.757326   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:15.757843   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:15.491876   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:17.991160   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:18.572189   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:00:18.572233   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:00:18.572262   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:18.607691   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:00:18.607732   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:00:18.790007   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:18.794252   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:00:18.794281   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:00:19.289819   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:19.294079   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:00:19.294104   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:00:19.789647   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:19.794119   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 200:
	ok
	I1210 00:00:19.800447   83859 api_server.go:141] control plane version: v1.31.2
	I1210 00:00:19.800477   83859 api_server.go:131] duration metric: took 4.011091942s to wait for apiserver health ...
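[Editor's note] The healthz sequence above is the normal apiserver startup progression: connection refused while the process starts, 403 for the anonymous probe before RBAC bootstrap roles exist, 500 while poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, then 200. The Go sketch below polls the same endpoint with that interpretation; it is not minikube's api_server.go, the node IP is copied from the log, and the retry interval and two-minute deadline are illustrative choices.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver certificate is signed by minikube's private CA, so an
		// anonymous probe skips verification here (not something to do in production).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.182:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 (anonymous user) and 500 (poststarthooks still running) both mean "keep waiting".
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}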
	I1210 00:00:19.800488   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:00:19.800496   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:00:19.802341   83859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:00:15.645243   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.145027   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.645018   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.145001   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.644893   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.145189   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.645579   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.144946   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.645832   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:20.145634   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.803715   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:00:19.814324   83859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 00:00:19.861326   83859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:00:19.873554   83859 system_pods.go:59] 8 kube-system pods found
	I1210 00:00:19.873588   83859 system_pods.go:61] "coredns-7c65d6cfc9-smnt7" [7d85cb49-3cbb-4133-acab-284ddf0b72f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:00:19.873597   83859 system_pods.go:61] "etcd-no-preload-048296" [3eeaede5-b759-49e5-8267-0aaf3c374178] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 00:00:19.873606   83859 system_pods.go:61] "kube-apiserver-no-preload-048296" [8e9d39ce-218a-4fe4-86ea-234d97a5e51d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 00:00:19.873615   83859 system_pods.go:61] "kube-controller-manager-no-preload-048296" [e28ccf7d-13e4-4b55-81d0-2dd348584bca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 00:00:19.873622   83859 system_pods.go:61] "kube-proxy-z479r" [eb6588a6-ac3c-4066-b69e-5613b03ade53] Running
	I1210 00:00:19.873634   83859 system_pods.go:61] "kube-scheduler-no-preload-048296" [0a184373-35d4-4ac6-92fb-65a4faf1c2e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 00:00:19.873646   83859 system_pods.go:61] "metrics-server-6867b74b74-sd58c" [19d64157-7b9b-4a39-8e0c-2c48729fbc93] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:00:19.873653   83859 system_pods.go:61] "storage-provisioner" [30b3ed5d-f843-4090-827f-e1ee6a7ea274] Running
	I1210 00:00:19.873661   83859 system_pods.go:74] duration metric: took 12.308897ms to wait for pod list to return data ...
	I1210 00:00:19.873668   83859 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:00:19.877459   83859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:00:19.877482   83859 node_conditions.go:123] node cpu capacity is 2
	I1210 00:00:19.877496   83859 node_conditions.go:105] duration metric: took 3.822698ms to run NodePressure ...
	I1210 00:00:19.877513   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:20.145838   83859 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 00:00:20.151038   83859 kubeadm.go:739] kubelet initialised
	I1210 00:00:20.151059   83859 kubeadm.go:740] duration metric: took 5.1997ms waiting for restarted kubelet to initialise ...
	I1210 00:00:20.151068   83859 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:00:20.157554   83859 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.162413   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.162437   83859 pod_ready.go:82] duration metric: took 4.858113ms for pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.162446   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.162452   83859 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.167285   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "etcd-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.167312   83859 pod_ready.go:82] duration metric: took 4.84903ms for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.167321   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "etcd-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.167328   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.172365   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "kube-apiserver-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.172386   83859 pod_ready.go:82] duration metric: took 5.052446ms for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.172395   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "kube-apiserver-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.172401   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:18.257283   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.756752   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.490469   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:22.990635   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.645094   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.145179   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.645829   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.145159   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.645390   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.645205   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.145625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.645299   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:25.144971   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.180104   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:24.680524   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:26.680818   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:22.757621   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:25.257680   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:24.991719   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:27.490099   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:25.644977   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.145967   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.144972   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.645030   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.145227   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.645104   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.144957   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.645516   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:30.145343   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.678941   83859 pod_ready.go:93] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:28.678972   83859 pod_ready.go:82] duration metric: took 8.506560777s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.678985   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z479r" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.684896   83859 pod_ready.go:93] pod "kube-proxy-z479r" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:28.684917   83859 pod_ready.go:82] duration metric: took 5.924915ms for pod "kube-proxy-z479r" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.684926   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:30.691918   83859 pod_ready.go:103] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:31.691333   83859 pod_ready.go:93] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:31.691362   83859 pod_ready.go:82] duration metric: took 3.006429416s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:31.691372   83859 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:27.756937   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:30.260931   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:29.990322   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:32.489835   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:30.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.145393   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.645952   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.145695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.644910   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.146012   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.645636   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.145664   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.645813   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:35.145365   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.697948   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.197360   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:32.756044   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:34.756585   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.756683   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:34.490676   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.491117   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:38.991231   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:35.645823   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.145410   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.144994   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.645701   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.144880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.645735   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.145806   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.645695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:40.145692   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.198422   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:40.697971   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:39.256015   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:41.256607   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:41.490276   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:43.989182   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:40.644935   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.145320   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.645470   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.145714   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.644961   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.145687   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.644990   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:44.144958   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:44.645780   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:44.645870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:44.680200   84547 cri.go:89] found id: ""
	I1210 00:00:44.680321   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.680334   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:44.680343   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:44.680413   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:44.715278   84547 cri.go:89] found id: ""
	I1210 00:00:44.715305   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.715312   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:44.715318   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:44.715377   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:44.749908   84547 cri.go:89] found id: ""
	I1210 00:00:44.749933   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.749941   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:44.749946   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:44.750008   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:44.786851   84547 cri.go:89] found id: ""
	I1210 00:00:44.786882   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.786893   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:44.786901   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:44.786966   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:44.830060   84547 cri.go:89] found id: ""
	I1210 00:00:44.830106   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.830117   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:44.830125   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:44.830191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:44.865525   84547 cri.go:89] found id: ""
	I1210 00:00:44.865560   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.865571   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:44.865579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:44.865643   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:44.902529   84547 cri.go:89] found id: ""
	I1210 00:00:44.902565   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.902575   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:44.902584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:44.902647   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:44.938876   84547 cri.go:89] found id: ""
	I1210 00:00:44.938904   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.938914   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:44.938925   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:44.938939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:44.992533   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:44.992570   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:45.005990   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:45.006020   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:45.116774   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:45.116797   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:45.116810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:45.187376   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:45.187411   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:42.698555   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.198082   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:43.756559   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.756755   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.990399   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.991485   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.728964   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:47.742500   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:47.742560   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:47.775751   84547 cri.go:89] found id: ""
	I1210 00:00:47.775783   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.775793   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:47.775799   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:47.775848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:47.810202   84547 cri.go:89] found id: ""
	I1210 00:00:47.810228   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.810235   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:47.810241   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:47.810302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:47.844686   84547 cri.go:89] found id: ""
	I1210 00:00:47.844730   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.844739   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:47.844745   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:47.844802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:47.884076   84547 cri.go:89] found id: ""
	I1210 00:00:47.884108   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.884119   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:47.884127   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:47.884188   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:47.923697   84547 cri.go:89] found id: ""
	I1210 00:00:47.923722   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.923729   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:47.923734   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:47.923791   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:47.955790   84547 cri.go:89] found id: ""
	I1210 00:00:47.955816   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.955824   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:47.955829   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:47.955890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:48.021501   84547 cri.go:89] found id: ""
	I1210 00:00:48.021529   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.021537   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:48.021543   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:48.021592   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:48.054645   84547 cri.go:89] found id: ""
	I1210 00:00:48.054675   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.054688   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:48.054699   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:48.054714   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:48.135706   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:48.135729   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:48.135746   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:48.212397   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:48.212438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:48.254002   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:48.254033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:48.305858   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:48.305891   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:47.697503   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:49.698076   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.758210   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.257400   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.490507   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:52.989527   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.819644   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:50.834153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:50.834233   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:50.872357   84547 cri.go:89] found id: ""
	I1210 00:00:50.872391   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.872402   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:50.872409   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:50.872471   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:50.909793   84547 cri.go:89] found id: ""
	I1210 00:00:50.909826   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.909839   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:50.909848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:50.909915   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:50.947167   84547 cri.go:89] found id: ""
	I1210 00:00:50.947206   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.947219   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:50.947228   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:50.947304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:50.987194   84547 cri.go:89] found id: ""
	I1210 00:00:50.987219   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.987228   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:50.987234   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:50.987287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:51.027433   84547 cri.go:89] found id: ""
	I1210 00:00:51.027464   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.027476   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:51.027483   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:51.027586   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:51.062156   84547 cri.go:89] found id: ""
	I1210 00:00:51.062186   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.062200   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:51.062208   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:51.062267   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:51.099161   84547 cri.go:89] found id: ""
	I1210 00:00:51.099187   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.099195   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:51.099202   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:51.099249   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:51.134400   84547 cri.go:89] found id: ""
	I1210 00:00:51.134432   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.134445   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:51.134459   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:51.134474   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:51.171842   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:51.171869   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:51.222815   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:51.222854   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:51.236499   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:51.236537   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:51.307835   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:51.307856   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:51.307871   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:53.888771   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:53.902167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:53.902234   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:53.940724   84547 cri.go:89] found id: ""
	I1210 00:00:53.940748   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.940756   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:53.940767   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:53.940823   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:53.975076   84547 cri.go:89] found id: ""
	I1210 00:00:53.975111   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.975122   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:53.975128   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:53.975191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:54.011123   84547 cri.go:89] found id: ""
	I1210 00:00:54.011149   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.011157   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:54.011162   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:54.011207   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:54.043594   84547 cri.go:89] found id: ""
	I1210 00:00:54.043620   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.043628   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:54.043633   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:54.043679   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:54.076172   84547 cri.go:89] found id: ""
	I1210 00:00:54.076208   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.076219   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:54.076227   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:54.076292   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:54.110646   84547 cri.go:89] found id: ""
	I1210 00:00:54.110679   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.110691   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:54.110711   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:54.110784   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:54.145901   84547 cri.go:89] found id: ""
	I1210 00:00:54.145929   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.145940   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:54.145947   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:54.146007   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:54.178589   84547 cri.go:89] found id: ""
	I1210 00:00:54.178618   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.178629   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:54.178639   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:54.178652   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:54.231005   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:54.231040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:54.244583   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:54.244608   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:54.318759   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:54.318787   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:54.318800   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:54.395975   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:54.396012   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:52.198012   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:54.199221   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:56.697929   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:52.756419   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:54.757807   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:57.257555   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:55.490621   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:57.491632   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:56.969699   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:56.985159   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:56.985232   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:57.020768   84547 cri.go:89] found id: ""
	I1210 00:00:57.020798   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.020807   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:57.020812   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:57.020861   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:57.055796   84547 cri.go:89] found id: ""
	I1210 00:00:57.055826   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.055834   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:57.055839   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:57.055897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:57.089345   84547 cri.go:89] found id: ""
	I1210 00:00:57.089375   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.089385   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:57.089392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:57.089460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:57.123969   84547 cri.go:89] found id: ""
	I1210 00:00:57.123993   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.124004   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:57.124012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:57.124075   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:57.158744   84547 cri.go:89] found id: ""
	I1210 00:00:57.158769   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.158777   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:57.158783   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:57.158842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:57.203933   84547 cri.go:89] found id: ""
	I1210 00:00:57.203954   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.203962   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:57.203968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:57.204025   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:57.238180   84547 cri.go:89] found id: ""
	I1210 00:00:57.238213   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.238224   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:57.238231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:57.238287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:57.273583   84547 cri.go:89] found id: ""
	I1210 00:00:57.273612   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.273623   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:57.273633   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:57.273648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:57.345992   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:57.346019   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:57.346035   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:57.428335   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:57.428369   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:57.466437   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:57.466472   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:57.517064   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:57.517099   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.030599   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:00.045151   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:00.045229   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:00.083050   84547 cri.go:89] found id: ""
	I1210 00:01:00.083074   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.083085   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:00.083093   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:00.083152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:00.119082   84547 cri.go:89] found id: ""
	I1210 00:01:00.119107   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.119120   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:00.119126   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:00.119185   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:00.166888   84547 cri.go:89] found id: ""
	I1210 00:01:00.166921   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.166931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:00.166939   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:00.166998   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:00.209504   84547 cri.go:89] found id: ""
	I1210 00:01:00.209526   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.209533   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:00.209539   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:00.209595   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:00.247649   84547 cri.go:89] found id: ""
	I1210 00:01:00.247672   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.247680   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:00.247686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:00.247736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:00.294419   84547 cri.go:89] found id: ""
	I1210 00:01:00.294445   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.294455   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:00.294463   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:00.294526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:00.328634   84547 cri.go:89] found id: ""
	I1210 00:01:00.328667   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.328677   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:00.328684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:00.328751   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:00.364667   84547 cri.go:89] found id: ""
	I1210 00:01:00.364696   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.364724   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:00.364733   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:00.364745   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.377518   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:00.377552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:00.449145   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:00.449166   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:00.449178   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:58.698000   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.196968   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:59.756367   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.756407   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:59.989896   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.991121   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:00.529462   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:00.529499   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:00.570471   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:00.570503   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.122835   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:03.136329   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:03.136405   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:03.170352   84547 cri.go:89] found id: ""
	I1210 00:01:03.170380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.170388   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:03.170393   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:03.170454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:03.204339   84547 cri.go:89] found id: ""
	I1210 00:01:03.204368   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.204379   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:03.204386   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:03.204448   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:03.237516   84547 cri.go:89] found id: ""
	I1210 00:01:03.237559   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.237572   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:03.237579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:03.237641   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:03.273379   84547 cri.go:89] found id: ""
	I1210 00:01:03.273407   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.273416   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:03.273421   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:03.274017   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:03.308987   84547 cri.go:89] found id: ""
	I1210 00:01:03.309026   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.309038   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:03.309046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:03.309102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:03.341913   84547 cri.go:89] found id: ""
	I1210 00:01:03.341943   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.341954   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:03.341961   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:03.342009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:03.375403   84547 cri.go:89] found id: ""
	I1210 00:01:03.375429   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.375437   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:03.375442   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:03.375494   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:03.408352   84547 cri.go:89] found id: ""
	I1210 00:01:03.408380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.408387   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:03.408397   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:03.408409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:03.483789   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:03.483831   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:03.532021   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:03.532050   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.585095   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:03.585129   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:03.600062   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:03.600089   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:03.671633   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:03.197410   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:05.698110   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:03.757014   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.257259   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:04.490159   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.490493   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:08.991447   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.172345   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:06.199809   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:06.199897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:06.240690   84547 cri.go:89] found id: ""
	I1210 00:01:06.240749   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.240760   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:06.240769   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:06.240822   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:06.275743   84547 cri.go:89] found id: ""
	I1210 00:01:06.275770   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.275779   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:06.275786   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:06.275848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:06.310399   84547 cri.go:89] found id: ""
	I1210 00:01:06.310427   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.310438   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:06.310445   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:06.310507   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:06.345278   84547 cri.go:89] found id: ""
	I1210 00:01:06.345308   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.345320   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:06.345328   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:06.345394   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:06.381926   84547 cri.go:89] found id: ""
	I1210 00:01:06.381961   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.381988   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:06.381994   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:06.382064   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:06.415341   84547 cri.go:89] found id: ""
	I1210 00:01:06.415367   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.415377   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:06.415385   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:06.415438   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:06.449701   84547 cri.go:89] found id: ""
	I1210 00:01:06.449733   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.449743   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:06.449750   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:06.449814   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:06.484271   84547 cri.go:89] found id: ""
	I1210 00:01:06.484298   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.484307   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:06.484317   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:06.484333   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:06.570279   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:06.570313   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:06.607812   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:06.607845   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:06.658206   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:06.658243   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:06.670949   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:06.670978   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:06.745785   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
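The "failed describe nodes" block above shows the harness hitting a control plane whose API server is not serving on localhost:8443. A minimal sketch for reproducing that check by hand over `minikube ssh` (the profile name, the use of `ss`, and the /healthz probe are assumptions, not taken from this log):

    # on the minikube node, e.g. via: minikube ssh -p <profile>
    # same describe-nodes command the harness runs (copied from the log above)
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # if it reports "connection refused", see whether anything listens on 8443 (assumes ss is available)
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # optional direct probe of the assumed API server health endpoint
    curl -ksS https://localhost:8443/healthz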
	I1210 00:01:09.246367   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:09.259390   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:09.259462   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:09.291811   84547 cri.go:89] found id: ""
	I1210 00:01:09.291841   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.291853   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:09.291865   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:09.291922   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:09.326996   84547 cri.go:89] found id: ""
	I1210 00:01:09.327027   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.327038   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:09.327045   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:09.327109   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:09.361026   84547 cri.go:89] found id: ""
	I1210 00:01:09.361062   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.361073   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:09.361081   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:09.361152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:09.397116   84547 cri.go:89] found id: ""
	I1210 00:01:09.397148   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.397159   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:09.397166   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:09.397225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:09.428989   84547 cri.go:89] found id: ""
	I1210 00:01:09.429025   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.429039   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:09.429046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:09.429111   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:09.462491   84547 cri.go:89] found id: ""
	I1210 00:01:09.462535   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.462547   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:09.462572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:09.462652   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:09.502705   84547 cri.go:89] found id: ""
	I1210 00:01:09.502728   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.502735   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:09.502740   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:09.502798   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:09.538721   84547 cri.go:89] found id: ""
	I1210 00:01:09.538744   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.538754   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:09.538766   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:09.538779   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:09.590732   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:09.590768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:09.604928   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:09.604960   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:09.681457   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:09.681484   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:09.681498   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:09.758116   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:09.758149   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
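Each diagnostic pass above lists CRI containers per control plane component with crictl. A small sketch that mirrors those checks in one loop (the loop is added here; the crictl invocation is copied verbatim from the log):

    # on the minikube node
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<no containers found>}"
    done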
	I1210 00:01:07.698904   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.198215   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:08.755738   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.756172   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.992252   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:13.491283   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:12.307016   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:12.320284   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:12.320371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:12.351984   84547 cri.go:89] found id: ""
	I1210 00:01:12.352007   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.352016   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:12.352024   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:12.352086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:12.387797   84547 cri.go:89] found id: ""
	I1210 00:01:12.387829   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.387840   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:12.387848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:12.387924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:12.417934   84547 cri.go:89] found id: ""
	I1210 00:01:12.417968   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.417979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:12.417987   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:12.418048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:12.451771   84547 cri.go:89] found id: ""
	I1210 00:01:12.451802   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.451815   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:12.451822   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:12.451890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:12.485007   84547 cri.go:89] found id: ""
	I1210 00:01:12.485037   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.485048   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:12.485055   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:12.485117   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:12.527851   84547 cri.go:89] found id: ""
	I1210 00:01:12.527879   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.527895   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:12.527905   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:12.527967   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:12.560341   84547 cri.go:89] found id: ""
	I1210 00:01:12.560364   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.560372   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:12.560377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:12.560428   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:12.593118   84547 cri.go:89] found id: ""
	I1210 00:01:12.593143   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.593150   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:12.593158   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:12.593169   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:12.658694   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:12.658717   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:12.658749   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:12.739742   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:12.739776   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:12.780949   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:12.780977   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:12.829570   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:12.829607   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
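The "Gathering logs for ..." steps map directly onto journalctl and dmesg; the same material can be collected by hand with the commands below (copied from the log, shown only as a reproduction sketch):

    # on the minikube node
    sudo journalctl -u kubelet -n 400                                        # kubelet log tail
    sudo journalctl -u crio -n 400                                           # CRI-O log tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings and errors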
	I1210 00:01:15.344239   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:15.358424   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:15.358527   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:15.390712   84547 cri.go:89] found id: ""
	I1210 00:01:15.390744   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.390757   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:15.390764   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:15.390847   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:15.424198   84547 cri.go:89] found id: ""
	I1210 00:01:15.424226   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.424234   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:15.424239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:15.424306   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:15.458340   84547 cri.go:89] found id: ""
	I1210 00:01:15.458363   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.458370   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:15.458377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:15.458422   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:15.491468   84547 cri.go:89] found id: ""
	I1210 00:01:15.491497   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.491507   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:15.491520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:15.491597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:12.698224   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.197841   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:12.756618   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:14.759492   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:17.256075   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.491494   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:17.989410   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.528405   84547 cri.go:89] found id: ""
	I1210 00:01:15.528437   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.528448   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:15.528455   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:15.528517   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:15.562966   84547 cri.go:89] found id: ""
	I1210 00:01:15.562995   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.563005   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:15.563012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:15.563063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:15.595815   84547 cri.go:89] found id: ""
	I1210 00:01:15.595838   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.595845   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:15.595850   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:15.595907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:15.631296   84547 cri.go:89] found id: ""
	I1210 00:01:15.631322   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.631333   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:15.631347   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:15.631362   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:15.680177   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:15.680213   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:15.693685   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:15.693724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:15.760285   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:15.760312   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:15.760326   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:15.837814   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:15.837855   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
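The "container status" step uses a crictl-or-docker fallback; the one-liner from the log, annotated (behaviour inferred from the command text only):

    # prefer crictl if installed, otherwise fall back to docker
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a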
	I1210 00:01:18.377112   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:18.390167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:18.390230   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:18.424095   84547 cri.go:89] found id: ""
	I1210 00:01:18.424127   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.424140   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:18.424150   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:18.424216   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:18.456840   84547 cri.go:89] found id: ""
	I1210 00:01:18.456868   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.456876   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:18.456882   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:18.456938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:18.492109   84547 cri.go:89] found id: ""
	I1210 00:01:18.492134   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.492145   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:18.492153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:18.492212   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:18.530445   84547 cri.go:89] found id: ""
	I1210 00:01:18.530472   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.530480   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:18.530486   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:18.530549   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:18.567036   84547 cri.go:89] found id: ""
	I1210 00:01:18.567060   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.567070   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:18.567077   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:18.567136   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:18.606829   84547 cri.go:89] found id: ""
	I1210 00:01:18.606853   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.606863   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:18.606870   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:18.606927   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:18.646032   84547 cri.go:89] found id: ""
	I1210 00:01:18.646061   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.646070   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:18.646075   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:18.646127   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:18.683969   84547 cri.go:89] found id: ""
	I1210 00:01:18.683997   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.684008   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:18.684019   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:18.684036   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:18.735760   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:18.735807   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:18.751689   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:18.751724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:18.823252   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:18.823272   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:18.823287   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:18.908071   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:18.908110   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
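Each cycle begins by checking for a running kube-apiserver process with pgrep before falling back to the container listings. A sketch of that check on its own (quoting added; the flags are those in the log: -x full match, -n newest, -f match the full command line):

    # on the minikube node
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "kube-apiserver process is running"
    else
      echo "no kube-apiserver process found"
    fi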
	I1210 00:01:17.698577   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:20.197901   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:19.757398   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:22.255920   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:20.490512   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:22.990168   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:21.445415   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:21.458091   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:21.458152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:21.492615   84547 cri.go:89] found id: ""
	I1210 00:01:21.492646   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.492657   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:21.492664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:21.492718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:21.529564   84547 cri.go:89] found id: ""
	I1210 00:01:21.529586   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.529594   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:21.529599   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:21.529669   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:21.566409   84547 cri.go:89] found id: ""
	I1210 00:01:21.566441   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.566450   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:21.566456   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:21.566528   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:21.601783   84547 cri.go:89] found id: ""
	I1210 00:01:21.601815   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.601827   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:21.601835   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:21.601895   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:21.635740   84547 cri.go:89] found id: ""
	I1210 00:01:21.635762   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.635769   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:21.635775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:21.635831   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:21.667777   84547 cri.go:89] found id: ""
	I1210 00:01:21.667806   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.667826   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:21.667834   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:21.667894   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:21.704355   84547 cri.go:89] found id: ""
	I1210 00:01:21.704380   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.704388   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:21.704398   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:21.704457   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:21.737365   84547 cri.go:89] found id: ""
	I1210 00:01:21.737398   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.737410   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:21.737422   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:21.737438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:21.751394   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:21.751434   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:21.823217   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:21.823240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:21.823255   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:21.910097   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:21.910131   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:21.950087   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:21.950123   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.504626   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:24.517920   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:24.517997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:24.551766   84547 cri.go:89] found id: ""
	I1210 00:01:24.551806   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.551814   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:24.551821   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:24.551880   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:24.588210   84547 cri.go:89] found id: ""
	I1210 00:01:24.588246   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.588256   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:24.588263   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:24.588341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:24.620569   84547 cri.go:89] found id: ""
	I1210 00:01:24.620598   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.620607   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:24.620613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:24.620673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:24.657527   84547 cri.go:89] found id: ""
	I1210 00:01:24.657550   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.657558   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:24.657564   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:24.657636   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:24.689382   84547 cri.go:89] found id: ""
	I1210 00:01:24.689410   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.689418   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:24.689423   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:24.689475   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:24.727170   84547 cri.go:89] found id: ""
	I1210 00:01:24.727207   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.727224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:24.727230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:24.727280   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:24.760733   84547 cri.go:89] found id: ""
	I1210 00:01:24.760759   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.760769   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:24.760775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:24.760842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:24.794750   84547 cri.go:89] found id: ""
	I1210 00:01:24.794782   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.794791   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:24.794799   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:24.794810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.847403   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:24.847441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:24.862240   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:24.862273   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:24.936373   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:24.936396   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:24.936409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:25.011126   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:25.011160   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:22.198527   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.697981   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.257466   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:26.258086   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.990792   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:26.991843   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
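The interleaved pod_ready lines are separate test runs polling metrics-server pods that never report Ready. The same readiness check can be expressed with kubectl wait, in the style the suite already uses elsewhere (the context name is a placeholder; the pod name is the one from this run):

    kubectl --context <profile> -n kube-system wait \
      --for=condition=Ready pod/metrics-server-6867b74b74-hg7c5 --timeout=120s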
	I1210 00:01:27.551134   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:27.564526   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:27.564665   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:27.600652   84547 cri.go:89] found id: ""
	I1210 00:01:27.600677   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.600685   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:27.600690   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:27.600753   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:27.643593   84547 cri.go:89] found id: ""
	I1210 00:01:27.643625   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.643636   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:27.643649   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:27.643705   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:27.680633   84547 cri.go:89] found id: ""
	I1210 00:01:27.680664   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.680676   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:27.680684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:27.680748   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:27.721790   84547 cri.go:89] found id: ""
	I1210 00:01:27.721824   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.721832   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:27.721837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:27.721901   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:27.756462   84547 cri.go:89] found id: ""
	I1210 00:01:27.756492   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.756504   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:27.756512   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:27.756574   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:27.792692   84547 cri.go:89] found id: ""
	I1210 00:01:27.792728   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.792838   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:27.792851   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:27.792912   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:27.830065   84547 cri.go:89] found id: ""
	I1210 00:01:27.830098   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.830107   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:27.830116   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:27.830182   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:27.867516   84547 cri.go:89] found id: ""
	I1210 00:01:27.867557   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.867580   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:27.867595   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:27.867611   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:27.917622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:27.917660   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:27.931687   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:27.931717   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:28.003827   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:28.003860   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:28.003876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:28.081375   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:28.081410   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:27.197999   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:29.198364   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:31.698061   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:28.755855   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:30.756687   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:29.489992   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:31.490850   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:33.990464   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:30.622885   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:30.635990   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:30.636065   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:30.671418   84547 cri.go:89] found id: ""
	I1210 00:01:30.671453   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.671465   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:30.671475   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:30.671548   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:30.707168   84547 cri.go:89] found id: ""
	I1210 00:01:30.707203   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.707214   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:30.707222   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:30.707275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:30.741349   84547 cri.go:89] found id: ""
	I1210 00:01:30.741377   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.741386   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:30.741395   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:30.741449   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:30.774952   84547 cri.go:89] found id: ""
	I1210 00:01:30.774985   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.774997   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:30.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:30.775062   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:30.809596   84547 cri.go:89] found id: ""
	I1210 00:01:30.809623   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.809631   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:30.809636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:30.809699   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:30.844256   84547 cri.go:89] found id: ""
	I1210 00:01:30.844288   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.844300   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:30.844308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:30.844371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:30.876086   84547 cri.go:89] found id: ""
	I1210 00:01:30.876113   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.876124   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:30.876131   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:30.876195   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:30.910858   84547 cri.go:89] found id: ""
	I1210 00:01:30.910884   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.910895   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:30.910905   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:30.910920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:30.990855   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:30.990876   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:30.990888   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:31.069186   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:31.069221   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:31.109653   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:31.109689   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:31.164962   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:31.165001   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:33.679784   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:33.692222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:33.692310   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:33.727937   84547 cri.go:89] found id: ""
	I1210 00:01:33.727960   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.727967   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:33.727973   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:33.728022   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:33.762122   84547 cri.go:89] found id: ""
	I1210 00:01:33.762151   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.762162   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:33.762171   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:33.762237   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:33.793855   84547 cri.go:89] found id: ""
	I1210 00:01:33.793883   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.793894   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:33.793903   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:33.793961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:33.827240   84547 cri.go:89] found id: ""
	I1210 00:01:33.827280   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.827292   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:33.827300   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:33.827366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:33.863705   84547 cri.go:89] found id: ""
	I1210 00:01:33.863728   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.863738   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:33.863743   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:33.863792   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:33.897189   84547 cri.go:89] found id: ""
	I1210 00:01:33.897213   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.897224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:33.897229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:33.897282   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:33.930001   84547 cri.go:89] found id: ""
	I1210 00:01:33.930034   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.930044   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:33.930052   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:33.930113   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:33.967330   84547 cri.go:89] found id: ""
	I1210 00:01:33.967359   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.967367   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:33.967378   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:33.967390   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:34.051952   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:34.051996   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:34.087729   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:34.087762   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:34.137879   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:34.137915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:34.151989   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:34.152025   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:34.225587   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:33.698633   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.197023   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:33.257021   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:35.258050   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.489900   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:38.490643   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.726065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:36.740513   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:36.740594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:36.774959   84547 cri.go:89] found id: ""
	I1210 00:01:36.774991   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.775000   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:36.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:36.775070   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:36.808888   84547 cri.go:89] found id: ""
	I1210 00:01:36.808921   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.808934   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:36.808941   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:36.809001   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:36.841695   84547 cri.go:89] found id: ""
	I1210 00:01:36.841727   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.841738   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:36.841748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:36.841840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:36.877479   84547 cri.go:89] found id: ""
	I1210 00:01:36.877509   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.877522   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:36.877531   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:36.877600   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:36.909232   84547 cri.go:89] found id: ""
	I1210 00:01:36.909257   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.909265   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:36.909271   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:36.909328   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:36.945858   84547 cri.go:89] found id: ""
	I1210 00:01:36.945892   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.945904   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:36.945912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:36.945981   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:36.978559   84547 cri.go:89] found id: ""
	I1210 00:01:36.978592   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.978604   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:36.978611   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:36.978674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:37.015523   84547 cri.go:89] found id: ""
	I1210 00:01:37.015555   84547 logs.go:282] 0 containers: []
	W1210 00:01:37.015587   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:37.015598   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:37.015613   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:37.094825   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:37.094876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:37.134442   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:37.134470   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:37.184691   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:37.184728   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:37.198801   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:37.198828   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:37.268415   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:39.768790   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:39.783106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:39.783184   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:39.816629   84547 cri.go:89] found id: ""
	I1210 00:01:39.816655   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.816665   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:39.816672   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:39.816749   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:39.851518   84547 cri.go:89] found id: ""
	I1210 00:01:39.851550   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.851581   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:39.851590   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:39.851648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:39.887790   84547 cri.go:89] found id: ""
	I1210 00:01:39.887827   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.887842   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:39.887852   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:39.887924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:39.922230   84547 cri.go:89] found id: ""
	I1210 00:01:39.922255   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.922262   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:39.922268   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:39.922332   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:39.957142   84547 cri.go:89] found id: ""
	I1210 00:01:39.957174   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.957184   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:39.957192   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:39.957254   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:40.004583   84547 cri.go:89] found id: ""
	I1210 00:01:40.004609   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.004618   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:40.004624   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:40.004675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:40.038278   84547 cri.go:89] found id: ""
	I1210 00:01:40.038300   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.038308   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:40.038313   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:40.038366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:40.071920   84547 cri.go:89] found id: ""
	I1210 00:01:40.071947   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.071954   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:40.071963   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:40.071973   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:40.142005   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:40.142033   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:40.142049   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:40.219413   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:40.219452   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:40.260786   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:40.260822   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:40.315943   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:40.315988   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:38.198247   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:40.198455   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:37.756243   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:39.756567   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:41.757107   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:40.990316   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:42.990948   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:42.832592   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:42.847532   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:42.847630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:42.879313   84547 cri.go:89] found id: ""
	I1210 00:01:42.879341   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.879348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:42.879354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:42.879403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:42.914839   84547 cri.go:89] found id: ""
	I1210 00:01:42.914866   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.914877   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:42.914884   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:42.914947   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:42.949514   84547 cri.go:89] found id: ""
	I1210 00:01:42.949553   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.949564   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:42.949572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:42.949632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:42.987974   84547 cri.go:89] found id: ""
	I1210 00:01:42.988004   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.988021   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:42.988029   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:42.988087   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:43.022835   84547 cri.go:89] found id: ""
	I1210 00:01:43.022860   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.022867   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:43.022873   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:43.022921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:43.055940   84547 cri.go:89] found id: ""
	I1210 00:01:43.055967   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.055975   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:43.055981   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:43.056030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:43.088728   84547 cri.go:89] found id: ""
	I1210 00:01:43.088754   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.088762   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:43.088769   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:43.088827   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:43.122809   84547 cri.go:89] found id: ""
	I1210 00:01:43.122842   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.122853   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:43.122865   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:43.122881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:43.172243   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:43.172277   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:43.186566   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:43.186596   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:43.264301   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:43.264327   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:43.264341   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:43.339804   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:43.339848   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:42.698066   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.197813   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:44.256069   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:46.256746   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.490253   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:47.989687   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.881356   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:45.894702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:45.894779   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:45.928927   84547 cri.go:89] found id: ""
	I1210 00:01:45.928951   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.928958   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:45.928964   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:45.929009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:45.965271   84547 cri.go:89] found id: ""
	I1210 00:01:45.965303   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.965315   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:45.965323   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:45.965392   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:45.999059   84547 cri.go:89] found id: ""
	I1210 00:01:45.999082   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.999090   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:45.999095   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:45.999140   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:46.034425   84547 cri.go:89] found id: ""
	I1210 00:01:46.034456   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.034468   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:46.034476   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:46.034529   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:46.067946   84547 cri.go:89] found id: ""
	I1210 00:01:46.067970   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.067986   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:46.067993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:46.068056   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:46.101671   84547 cri.go:89] found id: ""
	I1210 00:01:46.101699   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.101710   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:46.101718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:46.101783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:46.135860   84547 cri.go:89] found id: ""
	I1210 00:01:46.135885   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.135893   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:46.135898   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:46.135948   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:46.177398   84547 cri.go:89] found id: ""
	I1210 00:01:46.177439   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.177450   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:46.177461   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:46.177476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:46.248134   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:46.248160   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:46.248175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:46.323652   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:46.323688   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:46.364017   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:46.364044   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:46.429480   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:46.429524   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:48.956518   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:48.970209   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:48.970294   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:49.008933   84547 cri.go:89] found id: ""
	I1210 00:01:49.008966   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.008977   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:49.008986   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:49.009050   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:49.044802   84547 cri.go:89] found id: ""
	I1210 00:01:49.044833   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.044843   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:49.044850   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:49.044921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:49.077484   84547 cri.go:89] found id: ""
	I1210 00:01:49.077517   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.077525   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:49.077530   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:49.077576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:49.110144   84547 cri.go:89] found id: ""
	I1210 00:01:49.110174   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.110186   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:49.110193   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:49.110255   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:49.142591   84547 cri.go:89] found id: ""
	I1210 00:01:49.142622   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.142633   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:49.142646   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:49.142709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:49.175456   84547 cri.go:89] found id: ""
	I1210 00:01:49.175486   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.175497   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:49.175505   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:49.175603   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:49.208341   84547 cri.go:89] found id: ""
	I1210 00:01:49.208368   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.208379   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:49.208387   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:49.208445   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:49.241486   84547 cri.go:89] found id: ""
	I1210 00:01:49.241509   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.241518   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:49.241615   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:49.241647   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:49.280023   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:49.280051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:49.328822   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:49.328858   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:49.343076   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:49.343104   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:49.418010   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:49.418037   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:49.418051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:47.198917   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:49.697840   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:48.757230   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:51.256927   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:49.990701   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:52.490649   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:52.004053   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:52.017350   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:52.017418   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:52.055617   84547 cri.go:89] found id: ""
	I1210 00:01:52.055641   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.055648   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:52.055654   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:52.055712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:52.088601   84547 cri.go:89] found id: ""
	I1210 00:01:52.088629   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.088637   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:52.088642   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:52.088694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:52.121880   84547 cri.go:89] found id: ""
	I1210 00:01:52.121912   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.121922   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:52.121928   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:52.121986   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:52.161281   84547 cri.go:89] found id: ""
	I1210 00:01:52.161321   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.161334   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:52.161341   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:52.161406   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:52.222768   84547 cri.go:89] found id: ""
	I1210 00:01:52.222793   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.222800   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:52.222806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:52.222862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:52.271441   84547 cri.go:89] found id: ""
	I1210 00:01:52.271465   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.271473   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:52.271479   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:52.271526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:52.311128   84547 cri.go:89] found id: ""
	I1210 00:01:52.311152   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.311160   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:52.311165   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:52.311211   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:52.348862   84547 cri.go:89] found id: ""
	I1210 00:01:52.348885   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.348892   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:52.348900   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:52.348913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:52.401280   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:52.401324   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:52.415532   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:52.415580   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:52.484956   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:52.484979   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:52.484994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:52.565102   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:52.565137   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:55.106446   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:55.121756   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:55.121846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:55.158601   84547 cri.go:89] found id: ""
	I1210 00:01:55.158632   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.158643   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:55.158650   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:55.158712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:55.192424   84547 cri.go:89] found id: ""
	I1210 00:01:55.192454   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.192464   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:55.192471   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:55.192530   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:55.226178   84547 cri.go:89] found id: ""
	I1210 00:01:55.226204   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.226213   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:55.226222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:55.226285   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:55.264123   84547 cri.go:89] found id: ""
	I1210 00:01:55.264148   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.264161   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:55.264169   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:55.264226   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:55.302476   84547 cri.go:89] found id: ""
	I1210 00:01:55.302503   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.302512   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:55.302520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:55.302597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:55.342308   84547 cri.go:89] found id: ""
	I1210 00:01:55.342341   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.342352   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:55.342360   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:55.342419   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:55.382203   84547 cri.go:89] found id: ""
	I1210 00:01:55.382226   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.382232   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:55.382238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:55.382286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:55.421381   84547 cri.go:89] found id: ""
	I1210 00:01:55.421409   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.421421   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:55.421432   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:55.421449   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:55.473758   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:55.473793   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:55.488138   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:55.488166   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:01:52.198005   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:54.697589   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:56.697748   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:53.756310   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:55.756964   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:54.990271   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:57.490303   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	W1210 00:01:55.567216   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:55.567240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:55.567251   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:55.648276   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:55.648319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:58.185245   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:58.199015   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:58.199091   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:58.232327   84547 cri.go:89] found id: ""
	I1210 00:01:58.232352   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.232360   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:58.232368   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:58.232436   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:58.270312   84547 cri.go:89] found id: ""
	I1210 00:01:58.270342   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.270353   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:58.270360   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:58.270420   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:58.303389   84547 cri.go:89] found id: ""
	I1210 00:01:58.303415   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.303422   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:58.303427   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:58.303486   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:58.338703   84547 cri.go:89] found id: ""
	I1210 00:01:58.338735   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.338747   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:58.338755   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:58.338817   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:58.376727   84547 cri.go:89] found id: ""
	I1210 00:01:58.376759   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.376770   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:58.376779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:58.376841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:58.410192   84547 cri.go:89] found id: ""
	I1210 00:01:58.410217   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.410224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:58.410230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:58.410286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:58.449772   84547 cri.go:89] found id: ""
	I1210 00:01:58.449794   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.449802   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:58.449807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:58.449859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:58.484285   84547 cri.go:89] found id: ""
	I1210 00:01:58.484316   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.484328   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:58.484339   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:58.484356   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:58.538402   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:58.538438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:58.551361   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:58.551391   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:58.613809   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:58.613836   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:58.613850   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:58.689606   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:58.689640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:58.698283   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.197599   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:57.758229   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:00.256339   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:59.490413   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.491035   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:03.990725   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.230924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:01.244878   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:01.244990   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:01.280475   84547 cri.go:89] found id: ""
	I1210 00:02:01.280501   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.280509   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:01.280514   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:01.280561   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:01.323042   84547 cri.go:89] found id: ""
	I1210 00:02:01.323065   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.323071   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:01.323077   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:01.323124   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:01.356146   84547 cri.go:89] found id: ""
	I1210 00:02:01.356171   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.356181   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:01.356190   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:01.356247   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:01.392542   84547 cri.go:89] found id: ""
	I1210 00:02:01.392567   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.392577   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:01.392584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:01.392721   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:01.426232   84547 cri.go:89] found id: ""
	I1210 00:02:01.426257   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.426268   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:01.426275   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:01.426341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:01.459754   84547 cri.go:89] found id: ""
	I1210 00:02:01.459786   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.459798   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:01.459806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:01.459865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:01.492406   84547 cri.go:89] found id: ""
	I1210 00:02:01.492435   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.492445   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:01.492450   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:01.492499   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:01.532012   84547 cri.go:89] found id: ""
	I1210 00:02:01.532034   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.532042   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:01.532049   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:01.532060   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:01.583145   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:01.583181   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:01.596910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:01.596939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:01.670480   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:01.670506   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:01.670534   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:01.748001   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:01.748041   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:04.291065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:04.304507   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:04.304587   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:04.341673   84547 cri.go:89] found id: ""
	I1210 00:02:04.341700   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.341713   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:04.341720   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:04.341772   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:04.379743   84547 cri.go:89] found id: ""
	I1210 00:02:04.379776   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.379787   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:04.379795   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:04.379856   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:04.413532   84547 cri.go:89] found id: ""
	I1210 00:02:04.413562   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.413573   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:04.413588   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:04.413648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:04.449192   84547 cri.go:89] found id: ""
	I1210 00:02:04.449221   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.449231   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:04.449238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:04.449324   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:04.484638   84547 cri.go:89] found id: ""
	I1210 00:02:04.484666   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.484677   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:04.484686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:04.484745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:04.524854   84547 cri.go:89] found id: ""
	I1210 00:02:04.524889   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.524903   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:04.524912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:04.524976   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:04.564697   84547 cri.go:89] found id: ""
	I1210 00:02:04.564726   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.564737   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:04.564748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:04.564797   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:04.603506   84547 cri.go:89] found id: ""
	I1210 00:02:04.603534   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.603544   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:04.603554   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:04.603583   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:04.653025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:04.653062   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:04.666833   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:04.666878   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:04.745491   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:04.745513   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:04.745526   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:04.825267   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:04.825304   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:03.702878   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:06.197834   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:02.755541   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:04.757334   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:07.256160   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:06.490491   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:08.491144   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:07.365419   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:07.378968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:07.379030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:07.412830   84547 cri.go:89] found id: ""
	I1210 00:02:07.412859   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.412868   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:07.412873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:07.412938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:07.445622   84547 cri.go:89] found id: ""
	I1210 00:02:07.445661   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.445674   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:07.445682   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:07.445742   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:07.480431   84547 cri.go:89] found id: ""
	I1210 00:02:07.480466   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.480474   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:07.480480   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:07.480533   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:07.518741   84547 cri.go:89] found id: ""
	I1210 00:02:07.518776   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.518790   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:07.518797   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:07.518860   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:07.552193   84547 cri.go:89] found id: ""
	I1210 00:02:07.552216   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.552223   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:07.552229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:07.552275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:07.585762   84547 cri.go:89] found id: ""
	I1210 00:02:07.585784   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.585792   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:07.585798   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:07.585843   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:07.618619   84547 cri.go:89] found id: ""
	I1210 00:02:07.618645   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.618653   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:07.618659   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:07.618709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:07.653377   84547 cri.go:89] found id: ""
	I1210 00:02:07.653418   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.653428   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:07.653440   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:07.653456   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:07.709366   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:07.709401   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:07.723762   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:07.723792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:07.804849   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:07.804869   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:07.804886   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:07.887117   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:07.887154   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:10.423120   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:10.436563   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:10.436628   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:10.470622   84547 cri.go:89] found id: ""
	I1210 00:02:10.470650   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.470658   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:10.470664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:10.470735   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:10.506211   84547 cri.go:89] found id: ""
	I1210 00:02:10.506238   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.506250   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:10.506257   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:10.506368   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:08.198217   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.699364   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:09.256492   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:11.256897   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.990662   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:13.491635   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.541846   84547 cri.go:89] found id: ""
	I1210 00:02:10.541871   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.541879   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:10.541885   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:10.541952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:10.581391   84547 cri.go:89] found id: ""
	I1210 00:02:10.581416   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.581427   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:10.581435   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:10.581503   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:10.615172   84547 cri.go:89] found id: ""
	I1210 00:02:10.615206   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.615216   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:10.615223   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:10.615289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:10.650791   84547 cri.go:89] found id: ""
	I1210 00:02:10.650813   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.650821   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:10.650826   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:10.650876   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:10.685428   84547 cri.go:89] found id: ""
	I1210 00:02:10.685452   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.685460   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:10.685466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:10.685524   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:10.719139   84547 cri.go:89] found id: ""
	I1210 00:02:10.719174   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.719186   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:10.719196   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:10.719211   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:10.732045   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:10.732073   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:10.805084   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:10.805111   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:10.805127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:10.888301   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:10.888337   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:10.926005   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:10.926033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:13.479317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:13.494021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:13.494089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:13.527753   84547 cri.go:89] found id: ""
	I1210 00:02:13.527787   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.527799   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:13.527806   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:13.527862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:13.560574   84547 cri.go:89] found id: ""
	I1210 00:02:13.560607   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.560618   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:13.560625   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:13.560688   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:13.595507   84547 cri.go:89] found id: ""
	I1210 00:02:13.595551   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.595584   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:13.595592   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:13.595657   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:13.629848   84547 cri.go:89] found id: ""
	I1210 00:02:13.629873   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.629884   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:13.629892   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:13.629952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:13.662407   84547 cri.go:89] found id: ""
	I1210 00:02:13.662436   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.662447   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:13.662454   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:13.662509   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:13.694892   84547 cri.go:89] found id: ""
	I1210 00:02:13.694921   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.694940   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:13.694949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:13.695013   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:13.733248   84547 cri.go:89] found id: ""
	I1210 00:02:13.733327   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.733349   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:13.733358   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:13.733426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:13.771853   84547 cri.go:89] found id: ""
	I1210 00:02:13.771884   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.771894   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:13.771906   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:13.771920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:13.846886   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:13.846913   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:13.846928   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:13.929722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:13.929758   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:13.968401   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:13.968427   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:14.019770   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:14.019811   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:13.197726   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.198437   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:13.257271   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.755750   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.990729   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:18.490851   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:16.532794   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:16.547084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:16.547172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:16.584046   84547 cri.go:89] found id: ""
	I1210 00:02:16.584073   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.584084   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:16.584091   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:16.584150   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:16.615981   84547 cri.go:89] found id: ""
	I1210 00:02:16.616012   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.616023   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:16.616030   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:16.616094   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:16.647939   84547 cri.go:89] found id: ""
	I1210 00:02:16.647967   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.647979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:16.647986   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:16.648048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:16.682585   84547 cri.go:89] found id: ""
	I1210 00:02:16.682620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.682632   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:16.682640   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:16.682695   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:16.718593   84547 cri.go:89] found id: ""
	I1210 00:02:16.718620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.718628   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:16.718634   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:16.718687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:16.752513   84547 cri.go:89] found id: ""
	I1210 00:02:16.752536   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.752543   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:16.752549   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:16.752598   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:16.787679   84547 cri.go:89] found id: ""
	I1210 00:02:16.787702   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.787710   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:16.787715   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:16.787777   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:16.821275   84547 cri.go:89] found id: ""
	I1210 00:02:16.821297   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.821305   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:16.821312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:16.821322   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:16.872500   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:16.872533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:16.885185   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:16.885212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:16.962658   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:16.962679   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:16.962694   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:17.039689   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:17.039726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:19.578060   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:19.590601   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:19.590675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:19.622145   84547 cri.go:89] found id: ""
	I1210 00:02:19.622170   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.622179   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:19.622184   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:19.622231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:19.653204   84547 cri.go:89] found id: ""
	I1210 00:02:19.653232   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.653243   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:19.653250   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:19.653317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:19.685104   84547 cri.go:89] found id: ""
	I1210 00:02:19.685137   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.685148   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:19.685156   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:19.685213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:19.719087   84547 cri.go:89] found id: ""
	I1210 00:02:19.719113   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.719121   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:19.719126   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:19.719176   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:19.753210   84547 cri.go:89] found id: ""
	I1210 00:02:19.753239   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.753250   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:19.753258   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:19.753317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:19.787608   84547 cri.go:89] found id: ""
	I1210 00:02:19.787635   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.787645   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:19.787653   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:19.787718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:19.823111   84547 cri.go:89] found id: ""
	I1210 00:02:19.823142   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.823154   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:19.823161   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:19.823221   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:19.858274   84547 cri.go:89] found id: ""
	I1210 00:02:19.858301   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.858312   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:19.858323   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:19.858344   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:19.905386   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:19.905420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:19.918995   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:19.919026   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:19.990676   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:19.990700   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:19.990716   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:20.064396   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:20.064435   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:17.698109   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:20.197963   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:17.756633   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:19.757487   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:22.257036   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:20.990682   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:23.490383   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:22.604477   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:22.617408   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:22.617487   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:22.650144   84547 cri.go:89] found id: ""
	I1210 00:02:22.650178   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.650189   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:22.650197   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:22.650293   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:22.684342   84547 cri.go:89] found id: ""
	I1210 00:02:22.684367   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.684375   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:22.684380   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:22.684429   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:22.718168   84547 cri.go:89] found id: ""
	I1210 00:02:22.718194   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.718204   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:22.718211   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:22.718271   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:22.755192   84547 cri.go:89] found id: ""
	I1210 00:02:22.755222   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.755232   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:22.755240   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:22.755297   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:22.793095   84547 cri.go:89] found id: ""
	I1210 00:02:22.793129   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.793141   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:22.793149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:22.793209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:22.831116   84547 cri.go:89] found id: ""
	I1210 00:02:22.831146   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.831157   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:22.831164   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:22.831235   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:22.869652   84547 cri.go:89] found id: ""
	I1210 00:02:22.869686   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.869704   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:22.869709   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:22.869756   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:22.907480   84547 cri.go:89] found id: ""
	I1210 00:02:22.907504   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.907513   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:22.907520   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:22.907533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:22.983880   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:22.983902   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:22.983915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:23.062840   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:23.062880   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:23.101427   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:23.101476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:23.153861   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:23.153893   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:22.198638   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:24.698708   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:24.757526   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:27.256296   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:25.491835   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:27.990755   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:25.666908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:25.680047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:25.680125   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:25.715821   84547 cri.go:89] found id: ""
	I1210 00:02:25.715853   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.715865   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:25.715873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:25.715931   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:25.748191   84547 cri.go:89] found id: ""
	I1210 00:02:25.748222   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.748232   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:25.748239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:25.748295   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:25.786474   84547 cri.go:89] found id: ""
	I1210 00:02:25.786498   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.786505   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:25.786511   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:25.786569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:25.819282   84547 cri.go:89] found id: ""
	I1210 00:02:25.819319   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.819330   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:25.819337   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:25.819400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:25.858064   84547 cri.go:89] found id: ""
	I1210 00:02:25.858091   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.858100   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:25.858106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:25.858169   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:25.893335   84547 cri.go:89] found id: ""
	I1210 00:02:25.893362   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.893373   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:25.893380   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:25.893439   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:25.927225   84547 cri.go:89] found id: ""
	I1210 00:02:25.927254   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.927265   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:25.927272   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:25.927341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:25.963678   84547 cri.go:89] found id: ""
	I1210 00:02:25.963715   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.963725   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:25.963738   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:25.963756   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:25.994462   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:25.994488   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:26.061394   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:26.061442   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:26.061458   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:26.135152   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:26.135187   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:26.171961   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:26.171994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:28.723326   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:28.735721   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:28.735787   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:28.769493   84547 cri.go:89] found id: ""
	I1210 00:02:28.769516   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.769525   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:28.769530   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:28.769582   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:28.805594   84547 cri.go:89] found id: ""
	I1210 00:02:28.805641   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.805652   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:28.805658   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:28.805704   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:28.838982   84547 cri.go:89] found id: ""
	I1210 00:02:28.839007   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.839015   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:28.839020   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:28.839072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:28.876607   84547 cri.go:89] found id: ""
	I1210 00:02:28.876627   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.876635   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:28.876641   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:28.876700   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:28.910352   84547 cri.go:89] found id: ""
	I1210 00:02:28.910379   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.910386   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:28.910392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:28.910464   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:28.943896   84547 cri.go:89] found id: ""
	I1210 00:02:28.943917   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.943924   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:28.943930   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:28.943978   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:28.976554   84547 cri.go:89] found id: ""
	I1210 00:02:28.976583   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.976590   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:28.976596   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:28.976644   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:29.011334   84547 cri.go:89] found id: ""
	I1210 00:02:29.011357   84547 logs.go:282] 0 containers: []
	W1210 00:02:29.011364   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:29.011372   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:29.011384   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:29.061418   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:29.061464   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:29.074887   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:29.074913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:29.147240   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:29.147261   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:29.147272   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:29.225058   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:29.225094   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:27.197730   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:29.197818   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.198788   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:29.256802   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.258194   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:30.490822   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:32.990271   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.763432   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:31.777257   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:31.777359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:31.812558   84547 cri.go:89] found id: ""
	I1210 00:02:31.812588   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.812598   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:31.812606   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:31.812668   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:31.847042   84547 cri.go:89] found id: ""
	I1210 00:02:31.847065   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.847082   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:31.847088   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:31.847135   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:31.881182   84547 cri.go:89] found id: ""
	I1210 00:02:31.881208   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.881216   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:31.881221   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:31.881272   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:31.917367   84547 cri.go:89] found id: ""
	I1210 00:02:31.917393   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.917401   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:31.917407   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:31.917454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:31.956836   84547 cri.go:89] found id: ""
	I1210 00:02:31.956868   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.956883   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:31.956893   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:31.956952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:31.993125   84547 cri.go:89] found id: ""
	I1210 00:02:31.993151   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.993160   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:31.993168   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:31.993225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:32.031648   84547 cri.go:89] found id: ""
	I1210 00:02:32.031679   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.031687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:32.031692   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:32.031746   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:32.065894   84547 cri.go:89] found id: ""
	I1210 00:02:32.065923   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.065930   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:32.065941   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:32.065957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:32.133473   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:32.133496   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:32.133508   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:32.213129   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:32.213161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:32.251424   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:32.251453   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:32.302284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:32.302323   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:34.815963   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:34.829460   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:34.829543   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:34.865811   84547 cri.go:89] found id: ""
	I1210 00:02:34.865837   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.865847   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:34.865854   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:34.865916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:34.899185   84547 cri.go:89] found id: ""
	I1210 00:02:34.899211   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.899220   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:34.899227   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:34.899289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:34.932472   84547 cri.go:89] found id: ""
	I1210 00:02:34.932500   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.932509   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:34.932517   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:34.932581   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:34.965817   84547 cri.go:89] found id: ""
	I1210 00:02:34.965846   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.965857   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:34.965866   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:34.965930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:35.000036   84547 cri.go:89] found id: ""
	I1210 00:02:35.000066   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.000077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:35.000084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:35.000139   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:35.033808   84547 cri.go:89] found id: ""
	I1210 00:02:35.033839   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.033850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:35.033857   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:35.033916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:35.066242   84547 cri.go:89] found id: ""
	I1210 00:02:35.066269   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.066278   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:35.066285   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:35.066349   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:35.100740   84547 cri.go:89] found id: ""
	I1210 00:02:35.100763   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.100771   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:35.100779   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:35.100792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:35.155483   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:35.155520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:35.168910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:35.168939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:35.243234   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:35.243252   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:35.243263   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:35.320622   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:35.320657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:33.698543   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:36.198250   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:33.756108   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:35.757682   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:34.990969   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:37.495166   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:37.855684   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:37.869056   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:37.869156   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:37.903720   84547 cri.go:89] found id: ""
	I1210 00:02:37.903748   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.903759   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:37.903766   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:37.903859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:37.937741   84547 cri.go:89] found id: ""
	I1210 00:02:37.937780   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.937791   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:37.937808   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:37.937869   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:37.975255   84547 cri.go:89] found id: ""
	I1210 00:02:37.975281   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.975292   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:37.975299   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:37.975359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:38.017348   84547 cri.go:89] found id: ""
	I1210 00:02:38.017381   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.017393   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:38.017400   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:38.017460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:38.051039   84547 cri.go:89] found id: ""
	I1210 00:02:38.051065   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.051073   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:38.051079   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:38.051129   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:38.085752   84547 cri.go:89] found id: ""
	I1210 00:02:38.085782   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.085791   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:38.085799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:38.085858   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:38.119975   84547 cri.go:89] found id: ""
	I1210 00:02:38.120004   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.120014   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:38.120021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:38.120086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:38.161467   84547 cri.go:89] found id: ""
	I1210 00:02:38.161499   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.161526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:38.161537   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:38.161551   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:38.222277   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:38.222314   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:38.239300   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:38.239332   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:38.308997   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:38.309016   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:38.309032   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:38.394064   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:38.394108   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:38.198484   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.697916   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:38.257723   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.756530   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:39.990445   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:41.993952   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.933406   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:40.945862   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:40.945937   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:40.979503   84547 cri.go:89] found id: ""
	I1210 00:02:40.979532   84547 logs.go:282] 0 containers: []
	W1210 00:02:40.979540   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:40.979545   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:40.979619   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:41.016758   84547 cri.go:89] found id: ""
	I1210 00:02:41.016792   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.016803   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:41.016811   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:41.016873   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:41.053562   84547 cri.go:89] found id: ""
	I1210 00:02:41.053593   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.053601   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:41.053607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:41.053667   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:41.086713   84547 cri.go:89] found id: ""
	I1210 00:02:41.086745   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.086757   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:41.086767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:41.086830   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:41.122901   84547 cri.go:89] found id: ""
	I1210 00:02:41.122935   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.122945   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:41.122952   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:41.123011   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:41.156306   84547 cri.go:89] found id: ""
	I1210 00:02:41.156337   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.156355   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:41.156362   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:41.156423   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:41.189840   84547 cri.go:89] found id: ""
	I1210 00:02:41.189871   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.189882   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:41.189890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:41.189946   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:41.223019   84547 cri.go:89] found id: ""
	I1210 00:02:41.223051   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.223061   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:41.223072   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:41.223088   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:41.275608   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:41.275640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:41.289181   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:41.289210   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:41.358375   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:41.358404   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:41.358420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:41.440214   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:41.440250   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:43.980600   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:43.993110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:43.993165   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:44.026688   84547 cri.go:89] found id: ""
	I1210 00:02:44.026721   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.026732   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:44.026741   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:44.026796   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:44.062914   84547 cri.go:89] found id: ""
	I1210 00:02:44.062936   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.062943   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:44.062948   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:44.062999   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:44.105974   84547 cri.go:89] found id: ""
	I1210 00:02:44.106001   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.106009   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:44.106014   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:44.106061   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:44.140239   84547 cri.go:89] found id: ""
	I1210 00:02:44.140265   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.140274   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:44.140280   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:44.140338   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:44.175754   84547 cri.go:89] found id: ""
	I1210 00:02:44.175785   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.175796   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:44.175803   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:44.175870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:44.211663   84547 cri.go:89] found id: ""
	I1210 00:02:44.211694   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.211705   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:44.211712   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:44.211776   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:44.244796   84547 cri.go:89] found id: ""
	I1210 00:02:44.244821   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.244831   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:44.244837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:44.244898   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:44.282491   84547 cri.go:89] found id: ""
	I1210 00:02:44.282515   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.282528   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:44.282549   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:44.282562   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:44.335284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:44.335328   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:44.349489   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:44.349530   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:44.418643   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:44.418668   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:44.418682   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:44.493901   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:44.493932   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
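	Every "describe nodes" attempt in this stretch fails identically: the bundled v1.20.0 kubectl cannot reach localhost:8443 because no apiserver is listening. A short diagnostic sketch, assuming it is run on the node itself and that 8443 is this profile's apiserver secure port (as the error text implies):
	
	    sudo ss -tlnp | grep 8443                       # is anything listening on the apiserver port?
	    curl -k https://localhost:8443/healthz; echo    # standard apiserver health endpoint; refused while it is down
	    sudo crictl ps -a | grep kube-apiserver         # confirm no apiserver container exists, matching the log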
	I1210 00:02:43.197494   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:45.198341   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:43.256947   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:45.257225   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:47.258073   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:44.491413   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:46.990521   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:48.991847   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
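	The interleaved pod_ready.go:103 lines come from three other concurrent test processes (83859, 84259, 83900), each polling a metrics-server pod that keeps reporting Ready=False. A sketch of inspecting that condition directly with kubectl; PROFILE is a placeholder for the relevant minikube context (not recoverable from this excerpt), and the pod name is copied from the log:
	
	    PROFILE=...   # set to the profile/context that the polling process belongs to
	    kubectl --context "$PROFILE" -n kube-system get pod metrics-server-6867b74b74-hg7c5 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	    kubectl --context "$PROFILE" -n kube-system describe pod metrics-server-6867b74b74-hg7c5 | tail -n 20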
	I1210 00:02:47.033132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:47.046322   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:47.046403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:47.078490   84547 cri.go:89] found id: ""
	I1210 00:02:47.078521   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.078533   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:47.078541   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:47.078602   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:47.111457   84547 cri.go:89] found id: ""
	I1210 00:02:47.111479   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.111487   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:47.111492   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:47.111538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:47.144643   84547 cri.go:89] found id: ""
	I1210 00:02:47.144678   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.144689   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:47.144696   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:47.144757   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:47.178106   84547 cri.go:89] found id: ""
	I1210 00:02:47.178131   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.178141   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:47.178148   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:47.178213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:47.215670   84547 cri.go:89] found id: ""
	I1210 00:02:47.215697   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.215712   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:47.215718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:47.215767   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:47.248916   84547 cri.go:89] found id: ""
	I1210 00:02:47.248941   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.248948   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:47.248953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:47.249002   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:47.287632   84547 cri.go:89] found id: ""
	I1210 00:02:47.287660   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.287671   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:47.287680   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:47.287745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:47.327064   84547 cri.go:89] found id: ""
	I1210 00:02:47.327094   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.327103   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:47.327112   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:47.327126   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:47.341132   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:47.341176   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:47.417100   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:47.417121   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:47.417134   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:47.502612   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:47.502648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:47.541312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:47.541339   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.095403   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:50.108145   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:50.108202   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:50.140424   84547 cri.go:89] found id: ""
	I1210 00:02:50.140451   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.140462   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:50.140472   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:50.140532   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:50.173836   84547 cri.go:89] found id: ""
	I1210 00:02:50.173859   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.173866   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:50.173872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:50.173928   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:50.213916   84547 cri.go:89] found id: ""
	I1210 00:02:50.213937   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.213944   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:50.213949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:50.213997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:50.246854   84547 cri.go:89] found id: ""
	I1210 00:02:50.246889   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.246899   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:50.246907   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:50.246956   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:50.281416   84547 cri.go:89] found id: ""
	I1210 00:02:50.281448   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.281456   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:50.281462   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:50.281511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:50.313263   84547 cri.go:89] found id: ""
	I1210 00:02:50.313296   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.313308   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:50.313318   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:50.313385   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:50.347421   84547 cri.go:89] found id: ""
	I1210 00:02:50.347453   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.347463   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:50.347470   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:50.347544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:50.383111   84547 cri.go:89] found id: ""
	I1210 00:02:50.383134   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.383142   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:50.383151   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:50.383162   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:50.421982   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:50.422013   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.475478   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:50.475520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:50.489202   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:50.489256   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:02:47.199043   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:49.698055   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.698813   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:49.756585   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.757407   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.489831   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:53.490572   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	W1210 00:02:50.559501   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:50.559539   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:50.559552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.136042   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:53.149149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:53.149227   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:53.189323   84547 cri.go:89] found id: ""
	I1210 00:02:53.189349   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.189357   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:53.189365   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:53.189425   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:53.225235   84547 cri.go:89] found id: ""
	I1210 00:02:53.225269   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.225281   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:53.225288   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:53.225347   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:53.263465   84547 cri.go:89] found id: ""
	I1210 00:02:53.263492   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.263502   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:53.263510   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:53.263597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:53.297544   84547 cri.go:89] found id: ""
	I1210 00:02:53.297571   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.297583   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:53.297591   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:53.297656   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:53.331731   84547 cri.go:89] found id: ""
	I1210 00:02:53.331755   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.331762   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:53.331767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:53.331815   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:53.367395   84547 cri.go:89] found id: ""
	I1210 00:02:53.367427   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.367440   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:53.367447   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:53.367511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:53.403297   84547 cri.go:89] found id: ""
	I1210 00:02:53.403324   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.403332   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:53.403338   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:53.403398   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:53.437126   84547 cri.go:89] found id: ""
	I1210 00:02:53.437150   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.437158   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:53.437166   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:53.437177   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:53.489875   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:53.489914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:53.503915   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:53.503940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:53.578086   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:53.578114   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:53.578127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.658463   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:53.658501   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:53.699332   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.197941   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:54.257053   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.258352   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:55.990548   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:58.489947   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.197093   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:56.211959   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:56.212020   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:56.250462   84547 cri.go:89] found id: ""
	I1210 00:02:56.250484   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.250493   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:56.250498   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:56.250552   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:56.286828   84547 cri.go:89] found id: ""
	I1210 00:02:56.286854   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.286865   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:56.286872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:56.286939   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:56.320750   84547 cri.go:89] found id: ""
	I1210 00:02:56.320779   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.320787   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:56.320793   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:56.320840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:56.355903   84547 cri.go:89] found id: ""
	I1210 00:02:56.355943   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.355954   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:56.355960   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:56.356026   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:56.395979   84547 cri.go:89] found id: ""
	I1210 00:02:56.396007   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.396018   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:56.396025   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:56.396081   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:56.431481   84547 cri.go:89] found id: ""
	I1210 00:02:56.431506   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.431514   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:56.431520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:56.431594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:56.470318   84547 cri.go:89] found id: ""
	I1210 00:02:56.470349   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.470356   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:56.470361   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:56.470410   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:56.506388   84547 cri.go:89] found id: ""
	I1210 00:02:56.506411   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.506418   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:56.506429   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:56.506441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:56.558438   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:56.558483   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:56.571981   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:56.572004   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:56.634665   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:56.634697   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:56.634713   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:56.715663   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:56.715697   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:59.255305   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:59.268846   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:59.268930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:59.302334   84547 cri.go:89] found id: ""
	I1210 00:02:59.302359   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.302366   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:59.302372   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:59.302426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:59.336380   84547 cri.go:89] found id: ""
	I1210 00:02:59.336409   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.336419   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:59.336425   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:59.336492   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:59.370179   84547 cri.go:89] found id: ""
	I1210 00:02:59.370201   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.370210   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:59.370214   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:59.370268   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:59.403199   84547 cri.go:89] found id: ""
	I1210 00:02:59.403222   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.403229   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:59.403236   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:59.403307   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:59.438650   84547 cri.go:89] found id: ""
	I1210 00:02:59.438673   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.438681   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:59.438686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:59.438736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:59.473166   84547 cri.go:89] found id: ""
	I1210 00:02:59.473191   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.473199   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:59.473205   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:59.473264   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:59.506857   84547 cri.go:89] found id: ""
	I1210 00:02:59.506879   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.506888   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:59.506902   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:59.506963   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:59.551488   84547 cri.go:89] found id: ""
	I1210 00:02:59.551515   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.551526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:59.551542   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:59.551557   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:59.605032   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:59.605069   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:59.619238   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:59.619271   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:59.690772   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:59.690798   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:59.690813   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:59.774424   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:59.774460   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:58.198128   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:00.698106   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:58.756596   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:01.257970   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:00.490268   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:02.990951   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:02.315240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:02.329636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:02.329728   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:02.368571   84547 cri.go:89] found id: ""
	I1210 00:03:02.368599   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.368609   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:02.368621   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:02.368687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:02.406107   84547 cri.go:89] found id: ""
	I1210 00:03:02.406136   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.406148   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:02.406155   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:02.406219   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:02.443050   84547 cri.go:89] found id: ""
	I1210 00:03:02.443077   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.443091   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:02.443098   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:02.443146   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:02.484425   84547 cri.go:89] found id: ""
	I1210 00:03:02.484451   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.484461   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:02.484469   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:02.484536   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:02.525624   84547 cri.go:89] found id: ""
	I1210 00:03:02.525647   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.525655   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:02.525661   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:02.525711   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:02.564808   84547 cri.go:89] found id: ""
	I1210 00:03:02.564839   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.564850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:02.564856   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:02.564907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:02.598322   84547 cri.go:89] found id: ""
	I1210 00:03:02.598346   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.598354   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:02.598359   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:02.598417   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:02.632256   84547 cri.go:89] found id: ""
	I1210 00:03:02.632310   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.632322   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:02.632334   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:02.632348   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:02.686025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:02.686063   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:02.701214   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:02.701237   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:02.773453   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:02.773477   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:02.773491   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:02.862017   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:02.862068   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:05.401014   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:05.415009   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:05.415072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:05.454916   84547 cri.go:89] found id: ""
	I1210 00:03:05.454945   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.454955   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:05.454962   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:05.455023   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:05.492955   84547 cri.go:89] found id: ""
	I1210 00:03:05.492981   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.492988   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:05.492995   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:05.493046   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:02.698652   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.196840   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:03.755858   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.757395   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.489875   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:07.990053   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.539646   84547 cri.go:89] found id: ""
	I1210 00:03:05.539670   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.539683   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:05.539690   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:05.539755   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:05.574534   84547 cri.go:89] found id: ""
	I1210 00:03:05.574559   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.574567   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:05.574572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:05.574632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:05.608141   84547 cri.go:89] found id: ""
	I1210 00:03:05.608166   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.608174   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:05.608180   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:05.608243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:05.643725   84547 cri.go:89] found id: ""
	I1210 00:03:05.643751   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.643759   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:05.643765   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:05.643812   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:05.677730   84547 cri.go:89] found id: ""
	I1210 00:03:05.677760   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.677772   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:05.677779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:05.677846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:05.712475   84547 cri.go:89] found id: ""
	I1210 00:03:05.712507   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.712519   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:05.712531   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:05.712548   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:05.764532   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:05.764569   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:05.777910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:05.777938   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:05.851070   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:05.851099   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:05.851114   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:05.929518   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:05.929554   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.465166   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:08.478666   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:08.478723   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:08.513763   84547 cri.go:89] found id: ""
	I1210 00:03:08.513795   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.513808   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:08.513816   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:08.513877   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:08.548272   84547 cri.go:89] found id: ""
	I1210 00:03:08.548299   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.548306   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:08.548313   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:08.548397   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:08.581773   84547 cri.go:89] found id: ""
	I1210 00:03:08.581801   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.581812   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:08.581820   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:08.581884   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:08.613625   84547 cri.go:89] found id: ""
	I1210 00:03:08.613655   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.613664   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:08.613669   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:08.613716   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:08.644618   84547 cri.go:89] found id: ""
	I1210 00:03:08.644644   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.644652   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:08.644658   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:08.644717   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:08.676840   84547 cri.go:89] found id: ""
	I1210 00:03:08.676867   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.676879   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:08.676887   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:08.676952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:08.709918   84547 cri.go:89] found id: ""
	I1210 00:03:08.709952   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.709961   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:08.709969   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:08.710029   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:08.740842   84547 cri.go:89] found id: ""
	I1210 00:03:08.740871   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.740882   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:08.740893   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:08.740907   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:08.808679   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:08.808705   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:08.808721   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:08.884692   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:08.884733   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.920922   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:08.920952   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:08.971404   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:08.971441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
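
For reference, the dmesg invocation repeated in these cycles trims the kernel log before it is shipped back; broken out with comments (same flags as above, util-linux dmesg assumed), it reads:

    # -P: don't pipe into a pager; -H: human-readable output; -L=never: strip colour codes;
    # --level warn,err,crit,alert,emerg: keep only warning-and-worse messages;
    # tail -n 400: cap the capture at the last 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
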
	I1210 00:03:07.197548   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:09.197749   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:11.198894   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:08.258477   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:10.755482   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:10.490495   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:12.990709   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:11.484905   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:11.498791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:11.498865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:11.535784   84547 cri.go:89] found id: ""
	I1210 00:03:11.535809   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.535820   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:11.535827   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:11.535890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:11.567981   84547 cri.go:89] found id: ""
	I1210 00:03:11.568006   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.568019   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:11.568026   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:11.568083   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:11.601188   84547 cri.go:89] found id: ""
	I1210 00:03:11.601213   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.601224   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:11.601231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:11.601289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:11.635759   84547 cri.go:89] found id: ""
	I1210 00:03:11.635787   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.635798   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:11.635807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:11.635867   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:11.675512   84547 cri.go:89] found id: ""
	I1210 00:03:11.675537   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.675547   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:11.675554   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:11.675638   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:11.709889   84547 cri.go:89] found id: ""
	I1210 00:03:11.709912   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.709922   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:11.709929   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:11.709987   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:11.744571   84547 cri.go:89] found id: ""
	I1210 00:03:11.744603   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.744610   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:11.744616   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:11.744677   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:11.779091   84547 cri.go:89] found id: ""
	I1210 00:03:11.779123   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.779136   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:11.779146   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:11.779156   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:11.830682   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:11.830726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:11.844352   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:11.844383   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:11.915025   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:11.915046   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:11.915058   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:11.998581   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:11.998620   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:14.536408   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:14.550250   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:14.550321   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:14.586005   84547 cri.go:89] found id: ""
	I1210 00:03:14.586037   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.586049   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:14.586057   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:14.586118   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:14.620124   84547 cri.go:89] found id: ""
	I1210 00:03:14.620159   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.620170   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:14.620178   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:14.620231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:14.655079   84547 cri.go:89] found id: ""
	I1210 00:03:14.655104   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.655112   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:14.655117   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:14.655178   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:14.688253   84547 cri.go:89] found id: ""
	I1210 00:03:14.688285   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.688298   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:14.688308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:14.688371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:14.726904   84547 cri.go:89] found id: ""
	I1210 00:03:14.726932   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.726940   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:14.726945   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:14.726994   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:14.764836   84547 cri.go:89] found id: ""
	I1210 00:03:14.764868   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.764881   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:14.764890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:14.764955   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:14.803557   84547 cri.go:89] found id: ""
	I1210 00:03:14.803605   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.803616   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:14.803621   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:14.803674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:14.841070   84547 cri.go:89] found id: ""
	I1210 00:03:14.841102   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.841122   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:14.841137   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:14.841161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:14.907607   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:14.907631   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:14.907644   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:14.985179   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:14.985209   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:15.022654   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:15.022687   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:15.075224   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:15.075260   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:13.697072   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:15.697892   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:12.757232   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:14.758529   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.256541   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:15.490871   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.990140   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.589836   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:17.604045   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:17.604102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:17.643211   84547 cri.go:89] found id: ""
	I1210 00:03:17.643241   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.643251   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:17.643260   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:17.643320   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:17.675436   84547 cri.go:89] found id: ""
	I1210 00:03:17.675462   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.675472   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:17.675479   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:17.675538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:17.709428   84547 cri.go:89] found id: ""
	I1210 00:03:17.709465   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.709476   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:17.709484   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:17.709544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:17.764088   84547 cri.go:89] found id: ""
	I1210 00:03:17.764121   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.764132   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:17.764139   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:17.764200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:17.797908   84547 cri.go:89] found id: ""
	I1210 00:03:17.797933   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.797944   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:17.797953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:17.798016   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:17.829580   84547 cri.go:89] found id: ""
	I1210 00:03:17.829609   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.829620   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:17.829628   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:17.829687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:17.865106   84547 cri.go:89] found id: ""
	I1210 00:03:17.865136   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.865145   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:17.865150   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:17.865200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:17.896757   84547 cri.go:89] found id: ""
	I1210 00:03:17.896791   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.896803   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:17.896814   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:17.896830   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:17.977210   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:17.977244   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:18.016843   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:18.016867   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:18.065263   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:18.065294   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:18.078188   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:18.078212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:18.148164   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:17.698675   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:20.199732   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:19.755949   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:21.756959   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:19.990714   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:21.990766   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:23.991443   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:23.991468   83900 pod_ready.go:82] duration metric: took 4m0.007840011s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	E1210 00:03:23.991477   83900 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 00:03:23.991484   83900 pod_ready.go:39] duration metric: took 4m6.196076299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
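
The wait above exhausts its 4m0s per-pod budget because metrics-server never reports Ready. The equivalent check can be run by hand against the same cluster; this is only a sketch, the kubeconfig context placeholder and the k8s-app=metrics-server label selector are assumptions, and the 4-minute timeout simply mirrors the budget shown in the log.

    # Wait up to 4 minutes for metrics-server to become Ready, mirroring the pod_ready loop above.
    kubectl --context <profile> -n kube-system wait pod -l k8s-app=metrics-server \
      --for=condition=Ready --timeout=4m

    # If it stays NotReady, inspect events and probe failures.
    kubectl --context <profile> -n kube-system describe pod -l k8s-app=metrics-server
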
	I1210 00:03:23.991501   83900 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:03:23.991534   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:23.991610   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:24.032198   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:24.032227   83900 cri.go:89] found id: ""
	I1210 00:03:24.032237   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:24.032303   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.037671   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:24.037746   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:24.076467   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:24.076499   83900 cri.go:89] found id: ""
	I1210 00:03:24.076507   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:24.076557   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.081125   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:24.081193   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:24.125433   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:24.125472   83900 cri.go:89] found id: ""
	I1210 00:03:24.125483   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:24.125542   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.131023   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:24.131097   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:24.173215   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:24.173239   83900 cri.go:89] found id: ""
	I1210 00:03:24.173247   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:24.173303   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.179027   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:24.179108   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:24.228905   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:24.228936   83900 cri.go:89] found id: ""
	I1210 00:03:24.228946   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:24.229004   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.234441   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:24.234520   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:20.648921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:20.661458   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:20.661516   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:20.693473   84547 cri.go:89] found id: ""
	I1210 00:03:20.693509   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.693518   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:20.693524   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:20.693576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:20.726279   84547 cri.go:89] found id: ""
	I1210 00:03:20.726301   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.726309   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:20.726314   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:20.726375   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:20.759895   84547 cri.go:89] found id: ""
	I1210 00:03:20.759922   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.759931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:20.759936   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:20.759988   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:20.793934   84547 cri.go:89] found id: ""
	I1210 00:03:20.793964   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.793974   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:20.793982   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:20.794049   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:20.828040   84547 cri.go:89] found id: ""
	I1210 00:03:20.828066   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.828077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:20.828084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:20.828143   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:20.861917   84547 cri.go:89] found id: ""
	I1210 00:03:20.861950   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.861960   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:20.861967   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:20.862028   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:20.896458   84547 cri.go:89] found id: ""
	I1210 00:03:20.896481   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.896489   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:20.896494   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:20.896551   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:20.931014   84547 cri.go:89] found id: ""
	I1210 00:03:20.931044   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.931052   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:20.931061   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:20.931072   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:20.968693   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:20.968718   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:21.021880   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:21.021917   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:21.035848   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:21.035881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:21.104570   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:21.104604   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:21.104621   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:23.679447   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:23.692326   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:23.692426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:23.729773   84547 cri.go:89] found id: ""
	I1210 00:03:23.729796   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.729804   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:23.729809   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:23.729855   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:23.765875   84547 cri.go:89] found id: ""
	I1210 00:03:23.765905   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.765915   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:23.765922   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:23.765984   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:23.800785   84547 cri.go:89] found id: ""
	I1210 00:03:23.800821   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.800831   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:23.800838   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:23.800902   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:23.838137   84547 cri.go:89] found id: ""
	I1210 00:03:23.838160   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.838168   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:23.838173   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:23.838222   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:23.869916   84547 cri.go:89] found id: ""
	I1210 00:03:23.869947   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.869958   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:23.869966   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:23.870027   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:23.903939   84547 cri.go:89] found id: ""
	I1210 00:03:23.903962   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.903969   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:23.903975   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:23.904021   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:23.937092   84547 cri.go:89] found id: ""
	I1210 00:03:23.937119   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.937127   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:23.937133   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:23.937194   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:23.970562   84547 cri.go:89] found id: ""
	I1210 00:03:23.970599   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.970611   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:23.970622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:23.970641   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:23.983364   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:23.983394   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:24.066624   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:24.066645   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:24.066657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:24.151466   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:24.151502   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:24.204590   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:24.204615   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:22.698354   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.698462   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.256922   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:26.755965   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.279840   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:24.279859   83900 cri.go:89] found id: ""
	I1210 00:03:24.279866   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:24.279922   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.283898   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:24.283972   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:24.319541   83900 cri.go:89] found id: ""
	I1210 00:03:24.319598   83900 logs.go:282] 0 containers: []
	W1210 00:03:24.319612   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:24.319624   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:24.319686   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:24.356205   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:24.356228   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:24.356234   83900 cri.go:89] found id: ""
	I1210 00:03:24.356242   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:24.356302   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.361772   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.366656   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:24.366690   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:24.408922   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:24.408955   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:24.466720   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:24.466762   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:24.500254   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:24.500290   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:24.535025   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:24.535058   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:24.575706   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:24.575732   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:24.645227   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:24.645265   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:24.658833   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:24.658878   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:24.702076   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:24.702107   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:24.738869   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:24.738900   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:25.204715   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:25.204753   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:25.320267   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:25.320315   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:25.370599   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:25.370635   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:27.910863   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:27.925070   83900 api_server.go:72] duration metric: took 4m17.856306845s to wait for apiserver process to appear ...
	I1210 00:03:27.925098   83900 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:03:27.925164   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:27.925227   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:27.959991   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:27.960017   83900 cri.go:89] found id: ""
	I1210 00:03:27.960026   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:27.960081   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:27.964069   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:27.964128   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:27.997618   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:27.997640   83900 cri.go:89] found id: ""
	I1210 00:03:27.997650   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:27.997710   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.001497   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:28.001570   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:28.034858   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:28.034883   83900 cri.go:89] found id: ""
	I1210 00:03:28.034891   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:28.034953   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.038775   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:28.038852   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:28.074473   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:28.074497   83900 cri.go:89] found id: ""
	I1210 00:03:28.074506   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:28.074568   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.079082   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:28.079149   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:28.113948   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:28.113971   83900 cri.go:89] found id: ""
	I1210 00:03:28.113981   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:28.114045   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.118930   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:28.118982   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:28.162794   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:28.162820   83900 cri.go:89] found id: ""
	I1210 00:03:28.162827   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:28.162872   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.166637   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:28.166715   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:28.206591   83900 cri.go:89] found id: ""
	I1210 00:03:28.206619   83900 logs.go:282] 0 containers: []
	W1210 00:03:28.206627   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:28.206633   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:28.206693   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:28.251722   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:28.251748   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:28.251754   83900 cri.go:89] found id: ""
	I1210 00:03:28.251763   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:28.251821   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.256785   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.260355   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:28.260378   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:28.295801   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:28.295827   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:28.734920   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:28.734963   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:28.804266   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:28.804305   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:28.818223   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:28.818251   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:28.931008   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:28.931044   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:28.977544   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:28.977592   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:29.016645   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:29.016679   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:29.052920   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:29.052945   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:29.094464   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:29.094497   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:29.132520   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:29.132545   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:29.180364   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:29.180396   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:29.216627   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:29.216657   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
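
The container-status command that closes each gathering pass is written defensively; reformatted with comments (same command as above, no behavioural change), it is:

    # `which crictl || echo crictl` resolves crictl's full path, falling back to the bare name
    # if it is not on PATH; if the crictl listing fails altogether, the Docker CLI is tried.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
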
	I1210 00:03:26.774169   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:26.786888   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:26.786964   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:26.820391   84547 cri.go:89] found id: ""
	I1210 00:03:26.820423   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.820433   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:26.820441   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:26.820504   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:26.853927   84547 cri.go:89] found id: ""
	I1210 00:03:26.853955   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.853963   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:26.853971   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:26.854031   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:26.887309   84547 cri.go:89] found id: ""
	I1210 00:03:26.887337   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.887347   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:26.887353   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:26.887415   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:26.923869   84547 cri.go:89] found id: ""
	I1210 00:03:26.923897   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.923908   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:26.923915   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:26.923983   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:26.959919   84547 cri.go:89] found id: ""
	I1210 00:03:26.959952   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.959964   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:26.959971   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:26.960032   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:26.993742   84547 cri.go:89] found id: ""
	I1210 00:03:26.993775   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.993787   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:26.993794   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:26.993853   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:27.026977   84547 cri.go:89] found id: ""
	I1210 00:03:27.027011   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.027021   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:27.027028   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:27.027090   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:27.064135   84547 cri.go:89] found id: ""
	I1210 00:03:27.064164   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.064173   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:27.064181   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:27.064192   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:27.101758   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:27.101784   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:27.155536   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:27.155592   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:27.168891   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:27.168914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:27.242486   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:27.242511   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:27.242525   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:29.821740   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:29.834536   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:29.834604   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:29.868316   84547 cri.go:89] found id: ""
	I1210 00:03:29.868341   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.868348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:29.868354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:29.868402   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:29.902290   84547 cri.go:89] found id: ""
	I1210 00:03:29.902320   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.902331   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:29.902338   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:29.902399   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:29.948750   84547 cri.go:89] found id: ""
	I1210 00:03:29.948780   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.948792   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:29.948800   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:29.948864   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:29.994587   84547 cri.go:89] found id: ""
	I1210 00:03:29.994618   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.994629   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:29.994636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:29.994694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:30.042216   84547 cri.go:89] found id: ""
	I1210 00:03:30.042243   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.042256   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:30.042264   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:30.042345   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:30.075964   84547 cri.go:89] found id: ""
	I1210 00:03:30.075989   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.075999   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:30.076007   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:30.076072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:30.108647   84547 cri.go:89] found id: ""
	I1210 00:03:30.108676   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.108687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:30.108695   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:30.108760   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:30.140983   84547 cri.go:89] found id: ""
	I1210 00:03:30.141013   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.141022   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:30.141030   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:30.141040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:30.198281   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:30.198319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:30.213820   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:30.213849   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:30.283907   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:30.283927   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:30.283940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:30.360731   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:30.360768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:27.197512   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:29.698238   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.698679   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:28.757444   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.255481   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.759061   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1210 00:03:31.763064   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 200:
	ok
	I1210 00:03:31.763974   83900 api_server.go:141] control plane version: v1.31.2
	I1210 00:03:31.763992   83900 api_server.go:131] duration metric: took 3.83888731s to wait for apiserver health ...
	I1210 00:03:31.763999   83900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:03:31.764021   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:31.764077   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:31.798282   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:31.798312   83900 cri.go:89] found id: ""
	I1210 00:03:31.798320   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:31.798375   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.802661   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:31.802726   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:31.837861   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:31.837888   83900 cri.go:89] found id: ""
	I1210 00:03:31.837896   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:31.837943   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.842139   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:31.842224   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:31.876940   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:31.876967   83900 cri.go:89] found id: ""
	I1210 00:03:31.876977   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:31.877043   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.881416   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:31.881491   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:31.915449   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:31.915481   83900 cri.go:89] found id: ""
	I1210 00:03:31.915491   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:31.915575   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.919342   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:31.919400   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:31.954554   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:31.954583   83900 cri.go:89] found id: ""
	I1210 00:03:31.954592   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:31.954641   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.958484   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:31.958569   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:32.000361   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:32.000390   83900 cri.go:89] found id: ""
	I1210 00:03:32.000401   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:32.000462   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.004339   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:32.004400   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:32.043231   83900 cri.go:89] found id: ""
	I1210 00:03:32.043254   83900 logs.go:282] 0 containers: []
	W1210 00:03:32.043279   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:32.043285   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:32.043334   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:32.079572   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:32.079597   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:32.079608   83900 cri.go:89] found id: ""
	I1210 00:03:32.079615   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:32.079661   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.083526   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.087101   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:32.087126   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:32.494977   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:32.495021   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:32.509299   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:32.509342   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:32.612029   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:32.612066   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:32.656868   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:32.656900   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:32.695347   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:32.695376   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:32.732071   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:32.732100   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:32.767692   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:32.767718   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:32.836047   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:32.836088   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:32.887136   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:32.887175   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:32.929836   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:32.929873   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:32.972459   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:32.972492   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:33.029387   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:33.029415   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:32.904302   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:32.919123   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:32.919209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:32.952847   84547 cri.go:89] found id: ""
	I1210 00:03:32.952879   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.952889   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:32.952897   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:32.952961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:32.986993   84547 cri.go:89] found id: ""
	I1210 00:03:32.987020   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.987029   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:32.987035   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:32.987085   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:33.027506   84547 cri.go:89] found id: ""
	I1210 00:03:33.027536   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.027548   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:33.027556   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:33.027630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:33.061553   84547 cri.go:89] found id: ""
	I1210 00:03:33.061588   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.061605   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:33.061613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:33.061673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:33.097640   84547 cri.go:89] found id: ""
	I1210 00:03:33.097679   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.097693   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:33.097702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:33.097783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:33.131725   84547 cri.go:89] found id: ""
	I1210 00:03:33.131758   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.131768   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:33.131775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:33.131839   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:33.165731   84547 cri.go:89] found id: ""
	I1210 00:03:33.165759   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.165771   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:33.165779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:33.165841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:33.200288   84547 cri.go:89] found id: ""
	I1210 00:03:33.200310   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.200320   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:33.200338   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:33.200351   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:33.251524   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:33.251577   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:33.266197   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:33.266223   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:33.343529   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:33.343583   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:33.343600   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:33.420133   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:33.420175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:35.577482   83900 system_pods.go:59] 8 kube-system pods found
	I1210 00:03:35.577511   83900 system_pods.go:61] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running
	I1210 00:03:35.577516   83900 system_pods.go:61] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running
	I1210 00:03:35.577520   83900 system_pods.go:61] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running
	I1210 00:03:35.577524   83900 system_pods.go:61] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running
	I1210 00:03:35.577527   83900 system_pods.go:61] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running
	I1210 00:03:35.577529   83900 system_pods.go:61] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running
	I1210 00:03:35.577537   83900 system_pods.go:61] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:03:35.577542   83900 system_pods.go:61] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running
	I1210 00:03:35.577549   83900 system_pods.go:74] duration metric: took 3.81354476s to wait for pod list to return data ...
	I1210 00:03:35.577556   83900 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:03:35.579951   83900 default_sa.go:45] found service account: "default"
	I1210 00:03:35.579971   83900 default_sa.go:55] duration metric: took 2.410307ms for default service account to be created ...
	I1210 00:03:35.579977   83900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:03:35.584102   83900 system_pods.go:86] 8 kube-system pods found
	I1210 00:03:35.584126   83900 system_pods.go:89] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running
	I1210 00:03:35.584131   83900 system_pods.go:89] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running
	I1210 00:03:35.584136   83900 system_pods.go:89] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running
	I1210 00:03:35.584142   83900 system_pods.go:89] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running
	I1210 00:03:35.584148   83900 system_pods.go:89] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running
	I1210 00:03:35.584153   83900 system_pods.go:89] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running
	I1210 00:03:35.584163   83900 system_pods.go:89] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:03:35.584168   83900 system_pods.go:89] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running
	I1210 00:03:35.584181   83900 system_pods.go:126] duration metric: took 4.196356ms to wait for k8s-apps to be running ...
	I1210 00:03:35.584192   83900 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:03:35.584235   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:03:35.598192   83900 system_svc.go:56] duration metric: took 13.993491ms WaitForService to wait for kubelet
	I1210 00:03:35.598218   83900 kubeadm.go:582] duration metric: took 4m25.529459505s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:03:35.598238   83900 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:03:35.600992   83900 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:03:35.601012   83900 node_conditions.go:123] node cpu capacity is 2
	I1210 00:03:35.601025   83900 node_conditions.go:105] duration metric: took 2.782415ms to run NodePressure ...
	I1210 00:03:35.601038   83900 start.go:241] waiting for startup goroutines ...
	I1210 00:03:35.601049   83900 start.go:246] waiting for cluster config update ...
	I1210 00:03:35.601063   83900 start.go:255] writing updated cluster config ...
	I1210 00:03:35.601360   83900 ssh_runner.go:195] Run: rm -f paused
	I1210 00:03:35.647487   83900 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:03:35.650508   83900 out.go:177] * Done! kubectl is now configured to use "embed-certs-825613" cluster and "default" namespace by default
	I1210 00:03:34.199682   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:36.696731   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:33.258255   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:35.756802   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:35.971411   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:35.988993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:35.989059   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:36.022639   84547 cri.go:89] found id: ""
	I1210 00:03:36.022673   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.022684   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:36.022692   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:36.022758   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:36.055421   84547 cri.go:89] found id: ""
	I1210 00:03:36.055452   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.055461   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:36.055466   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:36.055514   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:36.087690   84547 cri.go:89] found id: ""
	I1210 00:03:36.087721   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.087731   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:36.087738   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:36.087802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:36.125209   84547 cri.go:89] found id: ""
	I1210 00:03:36.125240   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.125249   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:36.125254   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:36.125304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:36.157544   84547 cri.go:89] found id: ""
	I1210 00:03:36.157586   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.157599   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:36.157607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:36.157676   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:36.193415   84547 cri.go:89] found id: ""
	I1210 00:03:36.193448   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.193459   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:36.193466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:36.193525   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:36.228941   84547 cri.go:89] found id: ""
	I1210 00:03:36.228971   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.228982   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:36.228989   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:36.229052   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:36.264821   84547 cri.go:89] found id: ""
	I1210 00:03:36.264851   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.264862   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:36.264873   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:36.264887   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:36.314841   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:36.314881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:36.328664   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:36.328695   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:36.402929   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:36.402956   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:36.402970   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:36.480270   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:36.480306   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:39.022748   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:39.036323   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:39.036382   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:39.070582   84547 cri.go:89] found id: ""
	I1210 00:03:39.070608   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.070616   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:39.070622   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:39.070690   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:39.108783   84547 cri.go:89] found id: ""
	I1210 00:03:39.108814   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.108825   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:39.108832   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:39.108889   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:39.142300   84547 cri.go:89] found id: ""
	I1210 00:03:39.142330   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.142338   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:39.142344   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:39.142400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:39.177859   84547 cri.go:89] found id: ""
	I1210 00:03:39.177891   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.177903   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:39.177911   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:39.177965   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:39.212109   84547 cri.go:89] found id: ""
	I1210 00:03:39.212140   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.212152   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:39.212160   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:39.212209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:39.245451   84547 cri.go:89] found id: ""
	I1210 00:03:39.245490   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.245501   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:39.245509   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:39.245569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:39.278925   84547 cri.go:89] found id: ""
	I1210 00:03:39.278957   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.278967   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:39.278974   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:39.279063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:39.312322   84547 cri.go:89] found id: ""
	I1210 00:03:39.312350   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.312358   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:39.312367   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:39.312377   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:39.362844   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:39.362882   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:39.375938   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:39.375967   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:39.443649   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:39.443676   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:39.443691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:39.523722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:39.523757   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:38.698105   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:41.198814   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:38.256567   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:40.257434   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:42.062317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:42.075094   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:42.075157   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:42.106719   84547 cri.go:89] found id: ""
	I1210 00:03:42.106744   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.106751   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:42.106757   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:42.106805   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:42.141977   84547 cri.go:89] found id: ""
	I1210 00:03:42.142011   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.142022   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:42.142029   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:42.142089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:42.175117   84547 cri.go:89] found id: ""
	I1210 00:03:42.175151   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.175164   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:42.175172   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:42.175243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:42.209871   84547 cri.go:89] found id: ""
	I1210 00:03:42.209900   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.209911   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:42.209919   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:42.209982   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:42.252784   84547 cri.go:89] found id: ""
	I1210 00:03:42.252812   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.252822   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:42.252830   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:42.252891   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:42.285934   84547 cri.go:89] found id: ""
	I1210 00:03:42.285961   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.285973   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:42.285980   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:42.286040   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:42.327393   84547 cri.go:89] found id: ""
	I1210 00:03:42.327423   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.327433   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:42.327440   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:42.327498   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:42.362244   84547 cri.go:89] found id: ""
	I1210 00:03:42.362279   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.362289   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:42.362302   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:42.362317   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:42.441656   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:42.441691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:42.479357   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:42.479397   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:42.533002   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:42.533038   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:42.546485   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:42.546514   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:42.616912   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:45.117156   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:45.131265   84547 kubeadm.go:597] duration metric: took 4m2.157458218s to restartPrimaryControlPlane
	W1210 00:03:45.131350   84547 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:03:45.131379   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:03:43.699026   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:46.197420   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:42.756686   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:44.251077   84259 pod_ready.go:82] duration metric: took 4m0.000996899s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" ...
	E1210 00:03:44.251112   84259 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 00:03:44.251136   84259 pod_ready.go:39] duration metric: took 4m13.15350289s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:03:44.251170   84259 kubeadm.go:597] duration metric: took 4m23.227405415s to restartPrimaryControlPlane
	W1210 00:03:44.251238   84259 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:03:44.251368   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:03:46.778025   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.646620044s)
	I1210 00:03:46.778099   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:03:46.792384   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:03:46.802897   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:03:46.813393   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:03:46.813417   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:03:46.813473   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:03:46.823016   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:03:46.823093   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:03:46.832912   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:03:46.841961   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:03:46.842019   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:03:46.851160   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.859879   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:03:46.859938   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.869192   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:03:46.878423   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:03:46.878487   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:03:46.888103   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:03:47.105146   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:03:48.698211   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:51.197463   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:53.198578   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:55.697251   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:57.698289   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:00.198291   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:02.696926   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:04.697431   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:06.698260   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:10.270952   84259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.019545181s)
	I1210 00:04:10.271025   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:10.292112   84259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:04:10.307538   84259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:04:10.325375   84259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:04:10.325406   84259 kubeadm.go:157] found existing configuration files:
	
	I1210 00:04:10.325465   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 00:04:10.338892   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:04:10.338960   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:04:10.353888   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 00:04:10.364787   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:04:10.364852   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:04:10.374513   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 00:04:10.393430   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:04:10.393486   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:04:10.403621   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 00:04:10.412759   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:04:10.412824   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:04:10.422153   84259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:04:10.468789   84259 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:04:10.468852   84259 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:04:10.566712   84259 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:04:10.566840   84259 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:04:10.566939   84259 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:04:10.574253   84259 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:04:10.576376   84259 out.go:235]   - Generating certificates and keys ...
	I1210 00:04:10.576485   84259 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:04:10.576566   84259 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:04:10.576722   84259 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:04:10.576816   84259 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:04:10.576915   84259 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:04:10.576991   84259 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:04:10.577107   84259 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:04:10.577214   84259 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:04:10.577319   84259 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:04:10.577433   84259 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:04:10.577522   84259 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:04:10.577616   84259 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:04:10.870887   84259 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:04:10.977055   84259 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:04:11.160320   84259 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:04:11.207864   84259 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:04:11.315037   84259 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:04:11.315715   84259 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:04:11.319135   84259 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:04:09.197254   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:11.197831   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:11.321063   84259 out.go:235]   - Booting up control plane ...
	I1210 00:04:11.321193   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:04:11.321296   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:04:11.322134   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:04:11.341567   84259 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:04:11.347658   84259 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:04:11.347736   84259 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:04:11.492127   84259 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:04:11.492293   84259 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:04:11.994410   84259 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.01471ms
	I1210 00:04:11.994533   84259 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:04:13.697731   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:16.198771   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:16.998638   84259 kubeadm.go:310] [api-check] The API server is healthy after 5.002934845s
	I1210 00:04:17.009930   84259 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:04:17.025483   84259 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:04:17.059445   84259 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:04:17.059762   84259 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-871210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:04:17.071665   84259 kubeadm.go:310] [bootstrap-token] Using token: 1x1r34.gs3p33sqgju9dylj
	I1210 00:04:17.073936   84259 out.go:235]   - Configuring RBAC rules ...
	I1210 00:04:17.074055   84259 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:04:17.079920   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:04:17.090408   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:04:17.094072   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:04:17.096951   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:04:17.099929   84259 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:04:17.404431   84259 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:04:17.837125   84259 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:04:18.404721   84259 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:04:18.405661   84259 kubeadm.go:310] 
	I1210 00:04:18.405757   84259 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:04:18.405770   84259 kubeadm.go:310] 
	I1210 00:04:18.405871   84259 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:04:18.405883   84259 kubeadm.go:310] 
	I1210 00:04:18.405916   84259 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:04:18.406012   84259 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:04:18.406101   84259 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:04:18.406111   84259 kubeadm.go:310] 
	I1210 00:04:18.406197   84259 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:04:18.406209   84259 kubeadm.go:310] 
	I1210 00:04:18.406318   84259 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:04:18.406329   84259 kubeadm.go:310] 
	I1210 00:04:18.406412   84259 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:04:18.406482   84259 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:04:18.406544   84259 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:04:18.406550   84259 kubeadm.go:310] 
	I1210 00:04:18.406643   84259 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:04:18.406787   84259 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:04:18.406802   84259 kubeadm.go:310] 
	I1210 00:04:18.406919   84259 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 1x1r34.gs3p33sqgju9dylj \
	I1210 00:04:18.407089   84259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1210 00:04:18.407117   84259 kubeadm.go:310] 	--control-plane 
	I1210 00:04:18.407123   84259 kubeadm.go:310] 
	I1210 00:04:18.407207   84259 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:04:18.407214   84259 kubeadm.go:310] 
	I1210 00:04:18.407300   84259 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 1x1r34.gs3p33sqgju9dylj \
	I1210 00:04:18.407433   84259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1210 00:04:18.408197   84259 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:04:18.408272   84259 cni.go:84] Creating CNI manager for ""
	I1210 00:04:18.408289   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:04:18.410841   84259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:04:18.697285   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:20.698266   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:18.412209   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:04:18.422123   84259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 00:04:18.440972   84259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:04:18.441058   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:18.441138   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-871210 minikube.k8s.io/updated_at=2024_12_10T00_04_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=default-k8s-diff-port-871210 minikube.k8s.io/primary=true
	I1210 00:04:18.462185   84259 ops.go:34] apiserver oom_adj: -16
	I1210 00:04:18.625855   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:19.126760   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:19.626829   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:20.126663   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:20.626893   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:21.126851   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:21.625892   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.126904   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.626450   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.715909   84259 kubeadm.go:1113] duration metric: took 4.274924102s to wait for elevateKubeSystemPrivileges
	I1210 00:04:22.715958   84259 kubeadm.go:394] duration metric: took 5m1.740971462s to StartCluster
	I1210 00:04:22.715982   84259 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:04:22.716080   84259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:04:22.718538   84259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:04:22.718890   84259 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:04:22.718958   84259 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:04:22.719059   84259 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719079   84259 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.719091   84259 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:04:22.719100   84259 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719123   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.719120   84259 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719169   84259 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.719184   84259 addons.go:243] addon metrics-server should already be in state true
	I1210 00:04:22.719216   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.719133   84259 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:04:22.719134   84259 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-871210"
	I1210 00:04:22.719582   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719631   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.719717   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719728   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719753   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.719763   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.720519   84259 out.go:177] * Verifying Kubernetes components...
	I1210 00:04:22.722140   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:04:22.735592   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
	I1210 00:04:22.735716   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I1210 00:04:22.736075   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.736100   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.736605   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.736626   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.736605   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.736642   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.736995   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.737007   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.737217   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.737595   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.737639   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.739278   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41255
	I1210 00:04:22.741315   84259 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.741337   84259 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:04:22.741367   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.741739   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.741778   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.742259   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.742829   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.742858   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.743182   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.743632   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.743663   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.754879   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1210 00:04:22.755310   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.756015   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.756032   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.756397   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.756576   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.758535   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.759514   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33065
	I1210 00:04:22.759891   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.759925   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39821
	I1210 00:04:22.760309   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.760330   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.760346   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.760435   84259 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:04:22.760689   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.760831   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.760845   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.760860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.761166   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.761589   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.761627   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.761737   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:04:22.761754   84259 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:04:22.761774   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.762882   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.764440   84259 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:04:22.764810   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.765316   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.765339   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.765474   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.765675   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.765828   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.765894   84259 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:04:22.765912   84259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:04:22.765919   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.765934   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.768979   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.769446   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.769498   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.769626   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.769832   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.769956   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.770078   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.780995   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45841
	I1210 00:04:22.781451   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.782077   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.782098   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.782467   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.782705   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.784771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.785015   84259 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:04:22.785030   84259 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:04:22.785047   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.788208   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.788659   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.788690   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.788771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.789148   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.789275   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.789386   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.926076   84259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:04:22.945059   84259 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-871210" to be "Ready" ...
	I1210 00:04:22.970684   84259 node_ready.go:49] node "default-k8s-diff-port-871210" has status "Ready":"True"
	I1210 00:04:22.970727   84259 node_ready.go:38] duration metric: took 25.618738ms for node "default-k8s-diff-port-871210" to be "Ready" ...
	I1210 00:04:22.970740   84259 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:22.989661   84259 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:23.045411   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:04:23.045433   84259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:04:23.074907   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:04:23.076857   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:04:23.094809   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:04:23.094836   84259 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:04:23.125107   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:04:23.125136   84259 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:04:23.174286   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:04:23.554521   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554556   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554568   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554589   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554855   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.554871   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.554880   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554882   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.554888   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554893   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.554891   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.554903   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.555087   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.555099   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.555283   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.555354   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.555379   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.579741   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.579775   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.580075   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.580091   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.580113   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.776674   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.776701   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.777020   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.777060   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.777068   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.777076   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.777089   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.777314   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.777330   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.777347   84259 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-871210"
	I1210 00:04:23.778992   84259 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 00:04:23.198263   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:25.198299   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:23.780354   84259 addons.go:510] duration metric: took 1.061403814s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 00:04:25.012490   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:27.495987   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:27.697345   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:29.698123   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:31.692183   83859 pod_ready.go:82] duration metric: took 4m0.000797786s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" ...
	E1210 00:04:31.692211   83859 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 00:04:31.692228   83859 pod_ready.go:39] duration metric: took 4m11.541153015s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:31.692258   83859 kubeadm.go:597] duration metric: took 4m19.334550967s to restartPrimaryControlPlane
	W1210 00:04:31.692305   83859 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:04:31.692329   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:04:29.995452   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:31.997927   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:34.496689   84259 pod_ready.go:93] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.496716   84259 pod_ready.go:82] duration metric: took 11.507027662s for pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.496730   84259 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.501166   84259 pod_ready.go:93] pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.501188   84259 pod_ready.go:82] duration metric: took 4.45016ms for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.501198   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.506061   84259 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.506085   84259 pod_ready.go:82] duration metric: took 4.881919ms for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.506096   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.510401   84259 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.510422   84259 pod_ready.go:82] duration metric: took 4.320761ms for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.510432   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pj85d" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.514319   84259 pod_ready.go:93] pod "kube-proxy-pj85d" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.514340   84259 pod_ready.go:82] duration metric: took 3.902541ms for pod "kube-proxy-pj85d" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.514348   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.896369   84259 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.896393   84259 pod_ready.go:82] duration metric: took 382.038726ms for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.896401   84259 pod_ready.go:39] duration metric: took 11.925650242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:34.896415   84259 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:04:34.896466   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:04:34.910589   84259 api_server.go:72] duration metric: took 12.191654524s to wait for apiserver process to appear ...
	I1210 00:04:34.910617   84259 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:04:34.910639   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1210 00:04:34.916503   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I1210 00:04:34.917955   84259 api_server.go:141] control plane version: v1.31.2
	I1210 00:04:34.917979   84259 api_server.go:131] duration metric: took 7.355946ms to wait for apiserver health ...
	I1210 00:04:34.917987   84259 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:04:35.098220   84259 system_pods.go:59] 9 kube-system pods found
	I1210 00:04:35.098252   84259 system_pods.go:61] "coredns-7c65d6cfc9-7xpcc" [6cff7e56-1785-41c8-bd9c-db9d3f0bd05f] Running
	I1210 00:04:35.098259   84259 system_pods.go:61] "coredns-7c65d6cfc9-z2n25" [b6b81952-7281-4705-9536-06eb939a5807] Running
	I1210 00:04:35.098265   84259 system_pods.go:61] "etcd-default-k8s-diff-port-871210" [e9c747a1-72c0-4b30-b401-c39a41fe5eb5] Running
	I1210 00:04:35.098271   84259 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-871210" [1a27b619-b576-4a36-b88a-0c498ad38628] Running
	I1210 00:04:35.098276   84259 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-871210" [d22a69b7-3389-46a9-9ca5-a3101aa34e05] Running
	I1210 00:04:35.098280   84259 system_pods.go:61] "kube-proxy-pj85d" [d1b9b056-f4a3-419c-86fa-a94d88464f74] Running
	I1210 00:04:35.098285   84259 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-871210" [50a03a5c-d0e5-454a-9a7b-49963ff59d06] Running
	I1210 00:04:35.098294   84259 system_pods.go:61] "metrics-server-6867b74b74-7g2qm" [49ac129a-c85d-4af1-b3b2-06bc10bced77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:04:35.098298   84259 system_pods.go:61] "storage-provisioner" [ea716edd-4030-4ec3-b094-c3a50154b473] Running
	I1210 00:04:35.098309   84259 system_pods.go:74] duration metric: took 180.315426ms to wait for pod list to return data ...
	I1210 00:04:35.098322   84259 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:04:35.294342   84259 default_sa.go:45] found service account: "default"
	I1210 00:04:35.294367   84259 default_sa.go:55] duration metric: took 196.039183ms for default service account to be created ...
	I1210 00:04:35.294376   84259 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:04:35.497331   84259 system_pods.go:86] 9 kube-system pods found
	I1210 00:04:35.497364   84259 system_pods.go:89] "coredns-7c65d6cfc9-7xpcc" [6cff7e56-1785-41c8-bd9c-db9d3f0bd05f] Running
	I1210 00:04:35.497369   84259 system_pods.go:89] "coredns-7c65d6cfc9-z2n25" [b6b81952-7281-4705-9536-06eb939a5807] Running
	I1210 00:04:35.497373   84259 system_pods.go:89] "etcd-default-k8s-diff-port-871210" [e9c747a1-72c0-4b30-b401-c39a41fe5eb5] Running
	I1210 00:04:35.497377   84259 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-871210" [1a27b619-b576-4a36-b88a-0c498ad38628] Running
	I1210 00:04:35.497381   84259 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-871210" [d22a69b7-3389-46a9-9ca5-a3101aa34e05] Running
	I1210 00:04:35.497384   84259 system_pods.go:89] "kube-proxy-pj85d" [d1b9b056-f4a3-419c-86fa-a94d88464f74] Running
	I1210 00:04:35.497387   84259 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-871210" [50a03a5c-d0e5-454a-9a7b-49963ff59d06] Running
	I1210 00:04:35.497396   84259 system_pods.go:89] "metrics-server-6867b74b74-7g2qm" [49ac129a-c85d-4af1-b3b2-06bc10bced77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:04:35.497400   84259 system_pods.go:89] "storage-provisioner" [ea716edd-4030-4ec3-b094-c3a50154b473] Running
	I1210 00:04:35.497409   84259 system_pods.go:126] duration metric: took 203.02694ms to wait for k8s-apps to be running ...
	I1210 00:04:35.497416   84259 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:04:35.497456   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:35.512217   84259 system_svc.go:56] duration metric: took 14.792056ms WaitForService to wait for kubelet
	I1210 00:04:35.512248   84259 kubeadm.go:582] duration metric: took 12.793318604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:04:35.512274   84259 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:04:35.695292   84259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:04:35.695318   84259 node_conditions.go:123] node cpu capacity is 2
	I1210 00:04:35.695329   84259 node_conditions.go:105] duration metric: took 183.048181ms to run NodePressure ...
	I1210 00:04:35.695341   84259 start.go:241] waiting for startup goroutines ...
	I1210 00:04:35.695348   84259 start.go:246] waiting for cluster config update ...
	I1210 00:04:35.695361   84259 start.go:255] writing updated cluster config ...
	I1210 00:04:35.695666   84259 ssh_runner.go:195] Run: rm -f paused
	I1210 00:04:35.742539   84259 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:04:35.744394   84259 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-871210" cluster and "default" namespace by default
	I1210 00:04:57.851348   83859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.159001958s)
	I1210 00:04:57.851413   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:57.866601   83859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:04:57.876643   83859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:04:57.886172   83859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:04:57.886200   83859 kubeadm.go:157] found existing configuration files:
	
	I1210 00:04:57.886252   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:04:57.895643   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:04:57.895722   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:04:57.905397   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:04:57.914236   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:04:57.914299   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:04:57.923225   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:04:57.932422   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:04:57.932476   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:04:57.942840   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:04:57.952087   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:04:57.952159   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:04:57.961371   83859 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:04:58.005314   83859 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:04:58.005444   83859 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:04:58.098287   83859 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:04:58.098431   83859 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:04:58.098591   83859 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:04:58.106525   83859 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:04:58.109142   83859 out.go:235]   - Generating certificates and keys ...
	I1210 00:04:58.109219   83859 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:04:58.109320   83859 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:04:58.109456   83859 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:04:58.109536   83859 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:04:58.109637   83859 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:04:58.109728   83859 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:04:58.109840   83859 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:04:58.109940   83859 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:04:58.110047   83859 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:04:58.110152   83859 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:04:58.110225   83859 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:04:58.110296   83859 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:04:58.357649   83859 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:04:58.505840   83859 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:04:58.890560   83859 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:04:58.965928   83859 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:04:59.341665   83859 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:04:59.342240   83859 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:04:59.344644   83859 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:04:59.346225   83859 out.go:235]   - Booting up control plane ...
	I1210 00:04:59.346353   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:04:59.348071   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:04:59.348893   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:04:59.366824   83859 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:04:59.372906   83859 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:04:59.372962   83859 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:04:59.509554   83859 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:04:59.509695   83859 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:05:00.014472   83859 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.085309ms
	I1210 00:05:00.014639   83859 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:05:05.017162   83859 kubeadm.go:310] [api-check] The API server is healthy after 5.002677832s
	I1210 00:05:05.029078   83859 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:05:05.048524   83859 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:05:05.080458   83859 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:05:05.080735   83859 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-048296 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:05:05.091793   83859 kubeadm.go:310] [bootstrap-token] Using token: y5jzuu.af7syybsclcivlzq
	I1210 00:05:05.093253   83859 out.go:235]   - Configuring RBAC rules ...
	I1210 00:05:05.093401   83859 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:05:05.098842   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:05:05.106341   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:05:05.109935   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:05:05.114403   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:05:05.121436   83859 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:05:05.424690   83859 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:05:05.851096   83859 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:05:06.425724   83859 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:05:06.426713   83859 kubeadm.go:310] 
	I1210 00:05:06.426785   83859 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:05:06.426808   83859 kubeadm.go:310] 
	I1210 00:05:06.426904   83859 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:05:06.426936   83859 kubeadm.go:310] 
	I1210 00:05:06.426981   83859 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:05:06.427061   83859 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:05:06.427110   83859 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:05:06.427120   83859 kubeadm.go:310] 
	I1210 00:05:06.427199   83859 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:05:06.427210   83859 kubeadm.go:310] 
	I1210 00:05:06.427282   83859 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:05:06.427294   83859 kubeadm.go:310] 
	I1210 00:05:06.427381   83859 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:05:06.427486   83859 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:05:06.427623   83859 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:05:06.427635   83859 kubeadm.go:310] 
	I1210 00:05:06.427757   83859 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:05:06.427874   83859 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:05:06.427905   83859 kubeadm.go:310] 
	I1210 00:05:06.428032   83859 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y5jzuu.af7syybsclcivlzq \
	I1210 00:05:06.428167   83859 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1210 00:05:06.428201   83859 kubeadm.go:310] 	--control-plane 
	I1210 00:05:06.428211   83859 kubeadm.go:310] 
	I1210 00:05:06.428322   83859 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:05:06.428332   83859 kubeadm.go:310] 
	I1210 00:05:06.428438   83859 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y5jzuu.af7syybsclcivlzq \
	I1210 00:05:06.428572   83859 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1210 00:05:06.428746   83859 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:05:06.428832   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:05:06.428849   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:05:06.431674   83859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:05:06.433006   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:05:06.444058   83859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 00:05:06.462707   83859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:05:06.462838   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:06.462873   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-048296 minikube.k8s.io/updated_at=2024_12_10T00_05_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=no-preload-048296 minikube.k8s.io/primary=true
	I1210 00:05:06.493379   83859 ops.go:34] apiserver oom_adj: -16
	I1210 00:05:06.666080   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:07.166762   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:07.666408   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:08.166269   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:08.666734   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:09.166797   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:09.666522   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.166230   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.667061   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.774149   83859 kubeadm.go:1113] duration metric: took 4.311371383s to wait for elevateKubeSystemPrivileges
	I1210 00:05:10.774188   83859 kubeadm.go:394] duration metric: took 4m58.464009996s to StartCluster
	I1210 00:05:10.774211   83859 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:10.774290   83859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:05:10.777711   83859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:10.778040   83859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:05:10.778133   83859 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:05:10.778230   83859 addons.go:69] Setting storage-provisioner=true in profile "no-preload-048296"
	I1210 00:05:10.778247   83859 addons.go:234] Setting addon storage-provisioner=true in "no-preload-048296"
	W1210 00:05:10.778255   83859 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:05:10.778249   83859 addons.go:69] Setting default-storageclass=true in profile "no-preload-048296"
	I1210 00:05:10.778261   83859 addons.go:69] Setting metrics-server=true in profile "no-preload-048296"
	I1210 00:05:10.778276   83859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-048296"
	I1210 00:05:10.778286   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.778299   83859 addons.go:234] Setting addon metrics-server=true in "no-preload-048296"
	I1210 00:05:10.778339   83859 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1210 00:05:10.778394   83859 addons.go:243] addon metrics-server should already be in state true
	I1210 00:05:10.778446   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.778684   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778716   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778746   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.778721   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.778860   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778897   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.779748   83859 out.go:177] * Verifying Kubernetes components...
	I1210 00:05:10.781467   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:05:10.795108   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34465
	I1210 00:05:10.795227   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I1210 00:05:10.795615   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.795709   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.796135   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.796160   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.796221   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.796240   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.796522   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.796539   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.797128   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.797162   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.797200   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.797167   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.797697   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I1210 00:05:10.798091   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.798587   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.798613   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.798982   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.799324   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.803238   83859 addons.go:234] Setting addon default-storageclass=true in "no-preload-048296"
	W1210 00:05:10.803262   83859 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:05:10.803291   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.803683   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.803722   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.814000   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I1210 00:05:10.814046   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I1210 00:05:10.814470   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.814686   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.814992   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.815014   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.815427   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.815448   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.815515   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.815748   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.815974   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.816171   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.817989   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.818439   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.820342   83859 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:05:10.820360   83859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:05:10.821535   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:05:10.821556   83859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:05:10.821577   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.821652   83859 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:05:10.821673   83859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:05:10.821690   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.825763   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826205   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.826228   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42881
	I1210 00:05:10.826252   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826275   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.826294   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826327   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.826228   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826504   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.826563   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.826633   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.826711   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.826860   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.826864   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.827029   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:10.827152   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.827173   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.827241   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:10.827691   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.829421   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.829471   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.867902   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I1210 00:05:10.868566   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.869138   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.869169   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.869575   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.869782   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.871531   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.871792   83859 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:05:10.871810   83859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:05:10.871832   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.874309   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.874822   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.874845   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.874999   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.875202   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.875354   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.875501   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:11.009426   83859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:05:11.030237   83859 node_ready.go:35] waiting up to 6m0s for node "no-preload-048296" to be "Ready" ...
	I1210 00:05:11.054344   83859 node_ready.go:49] node "no-preload-048296" has status "Ready":"True"
	I1210 00:05:11.054366   83859 node_ready.go:38] duration metric: took 24.096361ms for node "no-preload-048296" to be "Ready" ...
	I1210 00:05:11.054376   83859 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:05:11.071740   83859 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:11.123723   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:05:11.137744   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:05:11.137772   83859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:05:11.159680   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:05:11.179245   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:05:11.179267   83859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:05:11.240553   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:05:11.240580   83859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:05:11.311813   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:05:12.041427   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041463   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041509   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041532   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041886   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.041949   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.041954   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.041963   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041967   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.041972   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041932   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.041986   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.042087   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.042229   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.042245   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.042297   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.042319   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.092616   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.092639   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.092910   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.092923   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.535943   83859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.224081671s)
	I1210 00:05:12.536008   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.536020   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.536456   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.536459   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.536482   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.536491   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.536498   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.536728   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.536735   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.536750   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.536761   83859 addons.go:475] Verifying addon metrics-server=true in "no-preload-048296"
	I1210 00:05:12.538638   83859 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 00:05:12.539910   83859 addons.go:510] duration metric: took 1.761785575s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
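The addon phase above finishes with storage-provisioner, default-storageclass, and metrics-server enabled (metrics-server is registered with the fake.domain/registry.k8s.io/echoserver:1.4 image and its pod is still Pending at this point). A minimal sketch of how that result could be checked by hand, assuming the same kubeconfig and context name as in the log; the deployment name is inferred from the metrics-server pod name and is an assumption:

	# Sketch only: manual verification of the addons reported as enabled above.
	kubectl --context no-preload-048296 -n kube-system get pods
	kubectl --context no-preload-048296 get storageclass
	kubectl --context no-preload-048296 -n kube-system get deploy metrics-server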
	I1210 00:05:13.078954   83859 pod_ready.go:103] pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:05:13.579737   83859 pod_ready.go:93] pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:13.579760   83859 pod_ready.go:82] duration metric: took 2.507994954s for pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:13.579770   83859 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:14.586682   83859 pod_ready.go:93] pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:14.586704   83859 pod_ready.go:82] duration metric: took 1.006927767s for pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:14.586714   83859 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.092318   83859 pod_ready.go:93] pod "etcd-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.092345   83859 pod_ready.go:82] duration metric: took 505.624811ms for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.092356   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.096802   83859 pod_ready.go:93] pod "kube-apiserver-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.096828   83859 pod_ready.go:82] duration metric: took 4.463914ms for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.096839   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.100904   83859 pod_ready.go:93] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.100923   83859 pod_ready.go:82] duration metric: took 4.07832ms for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.100932   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qklxb" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.176527   83859 pod_ready.go:93] pod "kube-proxy-qklxb" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.176552   83859 pod_ready.go:82] duration metric: took 75.613294ms for pod "kube-proxy-qklxb" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.176562   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:17.182952   83859 pod_ready.go:103] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:05:18.181993   83859 pod_ready.go:93] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:18.182016   83859 pod_ready.go:82] duration metric: took 3.005447779s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:18.182024   83859 pod_ready.go:39] duration metric: took 7.127639413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:05:18.182038   83859 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:05:18.182084   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:05:18.196127   83859 api_server.go:72] duration metric: took 7.418043985s to wait for apiserver process to appear ...
	I1210 00:05:18.196155   83859 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:05:18.196176   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:05:18.200537   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 200:
	ok
	I1210 00:05:18.201476   83859 api_server.go:141] control plane version: v1.31.2
	I1210 00:05:18.201501   83859 api_server.go:131] duration metric: took 5.340199ms to wait for apiserver health ...
	I1210 00:05:18.201508   83859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:05:18.208072   83859 system_pods.go:59] 9 kube-system pods found
	I1210 00:05:18.208105   83859 system_pods.go:61] "coredns-7c65d6cfc9-56djc" [2e25bad9-b88a-4ac8-a180-968bf6b057a2] Running
	I1210 00:05:18.208114   83859 system_pods.go:61] "coredns-7c65d6cfc9-8rxx7" [3fdfd41b-8ef7-41af-a703-ebecfe9ad319] Running
	I1210 00:05:18.208119   83859 system_pods.go:61] "etcd-no-preload-048296" [4447a7d6-df4b-4760-9352-4df0310017ee] Running
	I1210 00:05:18.208125   83859 system_pods.go:61] "kube-apiserver-no-preload-048296" [55a06182-1520-4497-8358-639438eeb297] Running
	I1210 00:05:18.208130   83859 system_pods.go:61] "kube-controller-manager-no-preload-048296" [f30bdb21-7742-46e7-90fe-403679f5e02a] Running
	I1210 00:05:18.208136   83859 system_pods.go:61] "kube-proxy-qklxb" [8bb029a1-abf9-4825-b9ec-0520a78cb3d8] Running
	I1210 00:05:18.208141   83859 system_pods.go:61] "kube-scheduler-no-preload-048296" [019d6d37-c586-4f12-8daf-b60752223cf1] Running
	I1210 00:05:18.208152   83859 system_pods.go:61] "metrics-server-6867b74b74-n2f8c" [8e9f56c9-fd67-4715-9148-1255be17f1fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:05:18.208163   83859 system_pods.go:61] "storage-provisioner" [55fe311f-4610-4805-9fb7-3f1cac7c96e6] Running
	I1210 00:05:18.208176   83859 system_pods.go:74] duration metric: took 6.661729ms to wait for pod list to return data ...
	I1210 00:05:18.208187   83859 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:05:18.375431   83859 default_sa.go:45] found service account: "default"
	I1210 00:05:18.375458   83859 default_sa.go:55] duration metric: took 167.260728ms for default service account to be created ...
	I1210 00:05:18.375467   83859 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:05:18.578148   83859 system_pods.go:86] 9 kube-system pods found
	I1210 00:05:18.578176   83859 system_pods.go:89] "coredns-7c65d6cfc9-56djc" [2e25bad9-b88a-4ac8-a180-968bf6b057a2] Running
	I1210 00:05:18.578183   83859 system_pods.go:89] "coredns-7c65d6cfc9-8rxx7" [3fdfd41b-8ef7-41af-a703-ebecfe9ad319] Running
	I1210 00:05:18.578187   83859 system_pods.go:89] "etcd-no-preload-048296" [4447a7d6-df4b-4760-9352-4df0310017ee] Running
	I1210 00:05:18.578191   83859 system_pods.go:89] "kube-apiserver-no-preload-048296" [55a06182-1520-4497-8358-639438eeb297] Running
	I1210 00:05:18.578195   83859 system_pods.go:89] "kube-controller-manager-no-preload-048296" [f30bdb21-7742-46e7-90fe-403679f5e02a] Running
	I1210 00:05:18.578198   83859 system_pods.go:89] "kube-proxy-qklxb" [8bb029a1-abf9-4825-b9ec-0520a78cb3d8] Running
	I1210 00:05:18.578203   83859 system_pods.go:89] "kube-scheduler-no-preload-048296" [019d6d37-c586-4f12-8daf-b60752223cf1] Running
	I1210 00:05:18.578209   83859 system_pods.go:89] "metrics-server-6867b74b74-n2f8c" [8e9f56c9-fd67-4715-9148-1255be17f1fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:05:18.578213   83859 system_pods.go:89] "storage-provisioner" [55fe311f-4610-4805-9fb7-3f1cac7c96e6] Running
	I1210 00:05:18.578223   83859 system_pods.go:126] duration metric: took 202.750724ms to wait for k8s-apps to be running ...
	I1210 00:05:18.578233   83859 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:05:18.578277   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:05:18.592635   83859 system_svc.go:56] duration metric: took 14.391582ms WaitForService to wait for kubelet
	I1210 00:05:18.592669   83859 kubeadm.go:582] duration metric: took 7.814589832s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:05:18.592690   83859 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:05:18.776611   83859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:05:18.776632   83859 node_conditions.go:123] node cpu capacity is 2
	I1210 00:05:18.776642   83859 node_conditions.go:105] duration metric: took 183.94737ms to run NodePressure ...
	I1210 00:05:18.776654   83859 start.go:241] waiting for startup goroutines ...
	I1210 00:05:18.776660   83859 start.go:246] waiting for cluster config update ...
	I1210 00:05:18.776672   83859 start.go:255] writing updated cluster config ...
	I1210 00:05:18.776944   83859 ssh_runner.go:195] Run: rm -f paused
	I1210 00:05:18.826550   83859 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:05:18.828405   83859 out.go:177] * Done! kubectl is now configured to use "no-preload-048296" cluster and "default" namespace by default
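Process 83859 ends here with the no-preload-048296 cluster up: the node is Ready, the system-critical pods report Ready, and the apiserver healthz probe returns 200. A minimal sketch of equivalent manual checks, assuming the kubeconfig at /home/jenkins/minikube-integration/19888-18950/kubeconfig and the context name shown in the final log line; the label selector mirrors the ones listed at pod_ready.go:36:

	# Sketch only: roughly reproduces the readiness and health waits recorded above.
	kubectl --context no-preload-048296 get nodes
	kubectl --context no-preload-048296 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	kubectl --context no-preload-048296 get --raw /healthz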
	I1210 00:05:43.088214   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:05:43.088352   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:05:43.089912   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:05:43.089988   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:05:43.090054   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:05:43.090140   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:05:43.090225   84547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:05:43.090302   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:05:43.092050   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:05:43.092141   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:05:43.092210   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:05:43.092305   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:05:43.092392   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:05:43.092493   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:05:43.092569   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:05:43.092680   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:05:43.092761   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:05:43.092865   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:05:43.092975   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:05:43.093045   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:05:43.093143   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:05:43.093188   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:05:43.093236   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:05:43.093317   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:05:43.093402   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:05:43.093561   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:05:43.093728   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:05:43.093785   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:05:43.093855   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:05:43.095285   84547 out.go:235]   - Booting up control plane ...
	I1210 00:05:43.095396   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:05:43.095469   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:05:43.095525   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:05:43.095630   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:05:43.095804   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:05:43.095873   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:05:43.095960   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096155   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096237   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096414   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096486   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096679   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096741   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096904   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096969   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.097122   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.097129   84547 kubeadm.go:310] 
	I1210 00:05:43.097167   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:05:43.097202   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:05:43.097211   84547 kubeadm.go:310] 
	I1210 00:05:43.097251   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:05:43.097280   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:05:43.097373   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:05:43.097379   84547 kubeadm.go:310] 
	I1210 00:05:43.097465   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:05:43.097495   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:05:43.097522   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:05:43.097531   84547 kubeadm.go:310] 
	I1210 00:05:43.097619   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:05:43.097688   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:05:43.097694   84547 kubeadm.go:310] 
	I1210 00:05:43.097794   84547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:05:43.097875   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:05:43.097941   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:05:43.098038   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:05:43.098124   84547 kubeadm.go:310] 
	W1210 00:05:43.098179   84547 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 00:05:43.098224   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:05:48.532537   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.434287494s)
	I1210 00:05:48.532615   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:05:48.546394   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:05:48.555650   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:05:48.555669   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:05:48.555721   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:05:48.565301   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:05:48.565368   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:05:48.575330   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:05:48.584238   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:05:48.584324   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:05:48.593395   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.602150   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:05:48.602209   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.611177   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:05:48.619837   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:05:48.619907   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
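The grep/rm pairs above are minikube's stale-config cleanup before retrying kubeadm init: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443 and is removed otherwise (here every grep fails because the earlier kubeadm reset deleted the files). The same check written as a short loop, for illustration only; the paths and endpoint are taken from the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done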
	I1210 00:05:48.629126   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:05:48.827680   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:07:44.857816   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:07:44.857930   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:07:44.859490   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:07:44.859542   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:07:44.859627   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:07:44.859758   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:07:44.859894   84547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:07:44.859953   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:07:44.861538   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:07:44.861657   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:07:44.861736   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:07:44.861809   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:07:44.861861   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:07:44.861920   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:07:44.861969   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:07:44.862024   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:07:44.862077   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:07:44.862143   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:07:44.862212   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:07:44.862246   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:07:44.862293   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:07:44.862343   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:07:44.862408   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:07:44.862505   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:07:44.862615   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:07:44.862765   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:07:44.862897   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:07:44.862955   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:07:44.863059   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:07:44.865485   84547 out.go:235]   - Booting up control plane ...
	I1210 00:07:44.865595   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:07:44.865720   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:07:44.865815   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:07:44.865929   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:07:44.866136   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:07:44.866209   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:07:44.866332   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866517   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866586   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866752   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866818   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866976   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867042   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867210   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867323   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867512   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867522   84547 kubeadm.go:310] 
	I1210 00:07:44.867611   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:07:44.867671   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:07:44.867691   84547 kubeadm.go:310] 
	I1210 00:07:44.867729   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:07:44.867766   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:07:44.867880   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:07:44.867895   84547 kubeadm.go:310] 
	I1210 00:07:44.868055   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:07:44.868105   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:07:44.868138   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:07:44.868146   84547 kubeadm.go:310] 
	I1210 00:07:44.868250   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:07:44.868354   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:07:44.868362   84547 kubeadm.go:310] 
	I1210 00:07:44.868464   84547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:07:44.868540   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:07:44.868606   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:07:44.868689   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:07:44.868741   84547 kubeadm.go:310] 
	I1210 00:07:44.868762   84547 kubeadm.go:394] duration metric: took 8m1.943355039s to StartCluster
	I1210 00:07:44.868799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:07:44.868852   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:07:44.906641   84547 cri.go:89] found id: ""
	I1210 00:07:44.906667   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.906675   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:07:44.906681   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:07:44.906734   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:07:44.942832   84547 cri.go:89] found id: ""
	I1210 00:07:44.942863   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.942872   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:07:44.942881   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:07:44.942945   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:07:44.978010   84547 cri.go:89] found id: ""
	I1210 00:07:44.978034   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.978042   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:07:44.978047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:07:44.978108   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:07:45.017066   84547 cri.go:89] found id: ""
	I1210 00:07:45.017089   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.017097   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:07:45.017110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:07:45.017172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:07:45.049757   84547 cri.go:89] found id: ""
	I1210 00:07:45.049778   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.049786   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:07:45.049791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:07:45.049842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:07:45.082754   84547 cri.go:89] found id: ""
	I1210 00:07:45.082789   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.082800   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:07:45.082807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:07:45.082933   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:07:45.117188   84547 cri.go:89] found id: ""
	I1210 00:07:45.117219   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.117227   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:07:45.117233   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:07:45.117302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:07:45.153747   84547 cri.go:89] found id: ""
	I1210 00:07:45.153776   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.153785   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:07:45.153795   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:07:45.153810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:07:45.207625   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:07:45.207662   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:07:45.220929   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:07:45.220957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:07:45.291850   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:07:45.291883   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:07:45.291899   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:07:45.397592   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:07:45.397629   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 00:07:45.446755   84547 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 00:07:45.446816   84547 out.go:270] * 
	W1210 00:07:45.446898   84547 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.446921   84547 out.go:270] * 
	W1210 00:07:45.448145   84547 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 00:07:45.452118   84547 out.go:201] 
	W1210 00:07:45.453335   84547 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.453398   84547 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 00:07:45.453438   84547 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 00:07:45.455704   84547 out.go:201] 
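A minimal troubleshooting sketch for the failure above, following the hints in the kubeadm output and the K8S_KUBELET_NOT_RUNNING suggestion; it assumes shell access to the affected node (for example via 'minikube ssh -p <profile>', where <profile> is a placeholder for the failing profile) and CRI-O as the runtime:

    # Check whether the kubelet unit is running and why it may have exited
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    # The preflight warning notes the unit is not enabled; enable it for restarts
    sudo systemctl enable kubelet.service
    # List any control-plane containers CRI-O managed to start (per the kubeadm hint)
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Back on the host, retry the start with the suggested cgroup-driver override
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd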
	
	
	==> CRI-O <==
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.843004787Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:2f2560bf3f170fe1fc85a677b8ae7d26b493f2d145d16e3c41ffd744e1717773,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ad18fb86-7bc3-4dad-9694-1e9360c36b88 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.843123559Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:2f2560bf3f170fe1fc85a677b8ae7d26b493f2d145d16e3c41ffd744e1717773,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1733789065477134214,StartedAt:1733789065505399783,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7xpcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cff7e56-1785-41c8-bd9c-db9d3f0bd05f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/6cff7e56-1785-41c8-bd9c-db9d3f0bd05f/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6cff7e56-1785-41c8-bd9c-db9d3f0bd05f/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6cff7e56-1785-41c8-bd9c-db9d3f0bd05f/containers/coredns/8e865e9a,Readonly:false,SelinuxRelabel:false,
Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/6cff7e56-1785-41c8-bd9c-db9d3f0bd05f/volumes/kubernetes.io~projected/kube-api-access-7w9p4,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7c65d6cfc9-7xpcc_6cff7e56-1785-41c8-bd9c-db9d3f0bd05f/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=ad18fb86-7bc3-4dad-9694-1e9360c36b88 name=/runtime.v1.RuntimeService/Container
Status
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.843504175Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4985f7331836a5ae10fb2fb99513b4cc87b70875d16351a7466c968396c6968c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7062fd0b-3a65-414e-b9e0-c45334dd10b0 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.843639653Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4985f7331836a5ae10fb2fb99513b4cc87b70875d16351a7466c968396c6968c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1733789065359555499,StartedAt:1733789065388648129,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.31.2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj85d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1b9b056-f4a3-419c-86fa-a94d88464f74,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d1b9b056-f4a3-419c-86fa-a94d88464f74/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d1b9b056-f4a3-419c-86fa-a94d88464f74/containers/kube-proxy/6dd5b2b6,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,Hos
tPath:/var/lib/kubelet/pods/d1b9b056-f4a3-419c-86fa-a94d88464f74/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/d1b9b056-f4a3-419c-86fa-a94d88464f74/volumes/kubernetes.io~projected/kube-api-access-mkjql,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-pj85d_d1b9b056-f4a3-419c-86fa-a94d88464f74/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" f
ile="otel-collector/interceptors.go:74" id=7062fd0b-3a65-414e-b9e0-c45334dd10b0 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.844046244Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:6399bf0bce56bda89140073b8d361b4afe760a3a5d0c89f5dd4af1cab20ab37a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=904eeb6d-3aa0-4ca1-9a19-f828716c2952 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.844140769Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:6399bf0bce56bda89140073b8d361b4afe760a3a5d0c89f5dd4af1cab20ab37a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1733789065217264788,StartedAt:1733789065250389922,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z2n25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b81952-7281-4705-9536-06eb939a5807,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/b6b81952-7281-4705-9536-06eb939a5807/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b6b81952-7281-4705-9536-06eb939a5807/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b6b81952-7281-4705-9536-06eb939a5807/containers/coredns/971d46a0,Readonly:false,SelinuxRelabel:false,
Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/b6b81952-7281-4705-9536-06eb939a5807/volumes/kubernetes.io~projected/kube-api-access-xgxfz,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7c65d6cfc9-z2n25_b6b81952-7281-4705-9536-06eb939a5807/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=904eeb6d-3aa0-4ca1-9a19-f828716c2952 name=/runtime.v1.RuntimeService/Container
Status
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.845046071Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f384fa1da72a45dcdbc35cfd2f4fa3ab98712f0cc2d80629ce52fbb0c443f0cc,Verbose:false,}" file="otel-collector/interceptors.go:62" id=a816038a-7be1-40a3-9edd-787429c4533a name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.845154944Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f384fa1da72a45dcdbc35cfd2f4fa3ab98712f0cc2d80629ce52fbb0c443f0cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1733789064905297722,StartedAt:1733789064935638580,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea716edd-4030-4ec3-b094-c3a50154b473,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/ea716edd-4030-4ec3-b094-c3a50154b473/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/ea716edd-4030-4ec3-b094-c3a50154b473/containers/storage-provisioner/860fa93d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/ea716edd-4030-4ec3-b094-c3a50154b473/volumes/kubernetes.io~projected/kube-api-access-wv9hc,Readonly:true,SelinuxR
elabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_ea716edd-4030-4ec3-b094-c3a50154b473/storage-provisioner/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a816038a-7be1-40a3-9edd-787429c4533a name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.845869498Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ffcfbdf5799fe9759c9be24bb1a4680940136ba17437b3687fad21f9dd1978f6,Verbose:false,}" file="otel-collector/interceptors.go:62" id=bf30a163-c6ba-495b-a80a-9d4dcbe50b89 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.846079447Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ffcfbdf5799fe9759c9be24bb1a4680940136ba17437b3687fad21f9dd1978f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1733789052619941796,StartedAt:1733789052726064627,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.31.2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985a02d3bfa184443d7fb95235dee937,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/985a02d3bfa184443d7fb95235dee937/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/985a02d3bfa184443d7fb95235dee937/containers/kube-controller-manager/90aadace,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagati
on:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-default-k8s-diff-port-871210_985a02d3bfa184443d7fb95235dee937/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,Oom
ScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=bf30a163-c6ba-495b-a80a-9d4dcbe50b89 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.846996757Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:5e017b3720454255e4a31ba485c6f73a540a911ce08804b8bd412b6fa0ae669a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=651a24e0-5eb5-4656-be91-7bc5998554eb name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.847094337Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:5e017b3720454255e4a31ba485c6f73a540a911ce08804b8bd412b6fa0ae669a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1733789052609489260,StartedAt:1733789052665293868,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e24f0391c006e0575694a0e26b27d9e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5e24f0391c006e0575694a0e26b27d9e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5e24f0391c006e0575694a0e26b27d9e/containers/kube-apiserver/0549d870,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapp
ing{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-default-k8s-diff-port-871210_5e24f0391c006e0575694a0e26b27d9e/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=651a24e0-5eb5-4656-be91-7bc5998554eb name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.847826581Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:de46d5ff86dd5461c31f73b51f91a26504e1bc4d47ae016ad46487c82e4a8a62,Verbose:false,}" file="otel-collector/interceptors.go:62" id=df919f5d-347f-4be3-b58b-4c07788c5b4b name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.847934487Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:de46d5ff86dd5461c31f73b51f91a26504e1bc4d47ae016ad46487c82e4a8a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1733789052601779446,StartedAt:1733789052722427208,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.31.2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170d4ffdb0ab37daa7cc398387a6b976,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/170d4ffdb0ab37daa7cc398387a6b976/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/170d4ffdb0ab37daa7cc398387a6b976/containers/kube-scheduler/ea72f857,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-default-k8s-diff-port-871210_170d4ffdb0ab37daa7cc398387a6b976/kube-scheduler/2.log,Resources:&ContainerResources{Lin
ux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=df919f5d-347f-4be3-b58b-4c07788c5b4b name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.848293747Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:36d1e8debce6d6d89cfaa6bd5a97b1694d5bc200ec750caa964110e95ef3820e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=42c76c10-f3ee-4338-aa05-b5bff6bfdd21 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.848402581Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:36d1e8debce6d6d89cfaa6bd5a97b1694d5bc200ec750caa964110e95ef3820e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1733789052594745348,StartedAt:1733789052721702850,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.15-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 779d4b3f1d29fa0566dfa9ae56e9ccf9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/779d4b3f1d29fa0566dfa9ae56e9ccf9/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/779d4b3f1d29fa0566dfa9ae56e9ccf9/containers/etcd/7160ba66,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/
pods/kube-system_etcd-default-k8s-diff-port-871210_779d4b3f1d29fa0566dfa9ae56e9ccf9/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=42c76c10-f3ee-4338-aa05-b5bff6bfdd21 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.860250014Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28709c06-cf90-47bc-a8c1-01cb58da71af name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.861001390Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789617860971651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28709c06-cf90-47bc-a8c1-01cb58da71af name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.862788512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1cc3fb2-f4be-4a93-97bc-70be28ec8678 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.862860956Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1cc3fb2-f4be-4a93-97bc-70be28ec8678 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.864672453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ef83ec3-e08d-4f60-bfb3-402132e1614f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.865083667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789617865065109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ef83ec3-e08d-4f60-bfb3-402132e1614f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.865714249Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84f3cff0-f474-4e5b-92aa-28826b0aca41 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.865764847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84f3cff0-f474-4e5b-92aa-28826b0aca41 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:37 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:13:37.865962720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f2560bf3f170fe1fc85a677b8ae7d26b493f2d145d16e3c41ffd744e1717773,PodSandboxId:2d35ff9b0b4e7e5a52a64316c62f56748c42ea5ec7190c2e1138f5849f9fc685,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789065436280195,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7xpcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cff7e56-1785-41c8-bd9c-db9d3f0bd05f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4985f7331836a5ae10fb2fb99513b4cc87b70875d16351a7466c968396c6968c,PodSandboxId:23d078bdf6810b0c8c165535a6e1cdb5a2cbdba7ddbb34aead976639142e494d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789065278412892,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj85d,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d1b9b056-f4a3-419c-86fa-a94d88464f74,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6399bf0bce56bda89140073b8d361b4afe760a3a5d0c89f5dd4af1cab20ab37a,PodSandboxId:19d9c622baf2311bd5365878c281a9e7c300a3af42bf12aace93141df867f1cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789065177051539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z2n25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b81952-728
1-4705-9536-06eb939a5807,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f384fa1da72a45dcdbc35cfd2f4fa3ab98712f0cc2d80629ce52fbb0c443f0cc,PodSandboxId:69aed00250100553182ed7483ccc5c286d23454a6ece9eeeb11eabd393e4d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1
733789064871716208,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea716edd-4030-4ec3-b094-c3a50154b473,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcfbdf5799fe9759c9be24bb1a4680940136ba17437b3687fad21f9dd1978f6,PodSandboxId:52b456845966d34aa556f24e7298067eed0081c5216a6a40c37c26e6b4c851a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173378905
2529970142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985a02d3bfa184443d7fb95235dee937,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e017b3720454255e4a31ba485c6f73a540a911ce08804b8bd412b6fa0ae669a,PodSandboxId:71b1ea323f87218f11fffd4c2ab04fd63d61946c4f7035501747a6e55252b314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733789052545410929,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e24f0391c006e0575694a0e26b27d9e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de46d5ff86dd5461c31f73b51f91a26504e1bc4d47ae016ad46487c82e4a8a62,PodSandboxId:0b23ebb38c2037752681775a18b8dde5f4bea1da7d2f4c7c28e3bda1282d9306,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Creat
edAt:1733789052542786778,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170d4ffdb0ab37daa7cc398387a6b976,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d1e8debce6d6d89cfaa6bd5a97b1694d5bc200ec750caa964110e95ef3820e,PodSandboxId:d595c29971d8383af6de543deb71112c2d2c01c8ff562e900f221de0be5f9331,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789
052497110116,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 779d4b3f1d29fa0566dfa9ae56e9ccf9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a6da443a267e57163cf919f914e2fce3e9ee54cf7ae17e914e1febf275171b,PodSandboxId:907ec5e7689c40ad35f5260d8ca5846b1f8315104ff491a5a7423506fab033e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733788763997142612,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e24f0391c006e0575694a0e26b27d9e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84f3cff0-f474-4e5b-92aa-28826b0aca41 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2f2560bf3f170       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   2d35ff9b0b4e7       coredns-7c65d6cfc9-7xpcc
	4985f7331836a       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   23d078bdf6810       kube-proxy-pj85d
	6399bf0bce56b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   19d9c622baf23       coredns-7c65d6cfc9-z2n25
	f384fa1da72a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   69aed00250100       storage-provisioner
	5e017b3720454       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   71b1ea323f872       kube-apiserver-default-k8s-diff-port-871210
	de46d5ff86dd5       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   0b23ebb38c203       kube-scheduler-default-k8s-diff-port-871210
	ffcfbdf5799fe       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   52b456845966d       kube-controller-manager-default-k8s-diff-port-871210
	36d1e8debce6d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   d595c29971d83       etcd-default-k8s-diff-port-871210
	35a6da443a267       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   907ec5e7689c4       kube-apiserver-default-k8s-diff-port-871210
	
	
	==> coredns [2f2560bf3f170fe1fc85a677b8ae7d26b493f2d145d16e3c41ffd744e1717773] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [6399bf0bce56bda89140073b8d361b4afe760a3a5d0c89f5dd4af1cab20ab37a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-871210
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-871210
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=default-k8s-diff-port-871210
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T00_04_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:04:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-871210
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:13:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:04:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:04:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:04:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:04:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.54
	  Hostname:    default-k8s-diff-port-871210
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f85c6d29444243079d72aa1918e9bb64
	  System UUID:                f85c6d29-4442-4307-9d72-aa1918e9bb64
	  Boot ID:                    917a833d-0235-453d-a23c-4ce687ec67e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7xpcc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m16s
	  kube-system                 coredns-7c65d6cfc9-z2n25                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m16s
	  kube-system                 etcd-default-k8s-diff-port-871210                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-871210             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-871210    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-pj85d                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-scheduler-default-k8s-diff-port-871210             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-6867b74b74-7g2qm                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m15s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m12s  kube-proxy       
	  Normal  Starting                 9m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s  kubelet          Node default-k8s-diff-port-871210 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s  kubelet          Node default-k8s-diff-port-871210 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s  kubelet          Node default-k8s-diff-port-871210 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m17s  node-controller  Node default-k8s-diff-port-871210 event: Registered Node default-k8s-diff-port-871210 in Controller
	
	
	==> dmesg <==
	[  +0.037298] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec 9 23:59] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.048240] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609863] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.698062] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.062342] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072249] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.221911] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.164973] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.310278] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.184737] systemd-fstab-generator[805]: Ignoring "noauto" option for root device
	[  +0.063522] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.067786] systemd-fstab-generator[923]: Ignoring "noauto" option for root device
	[  +5.570506] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.303795] kauditd_printk_skb: 54 callbacks suppressed
	[ +23.589811] kauditd_printk_skb: 31 callbacks suppressed
	[Dec10 00:04] systemd-fstab-generator[2628]: Ignoring "noauto" option for root device
	[  +0.064939] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.985788] systemd-fstab-generator[2947]: Ignoring "noauto" option for root device
	[  +0.078498] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.264207] systemd-fstab-generator[3075]: Ignoring "noauto" option for root device
	[  +0.111509] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.362547] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [36d1e8debce6d6d89cfaa6bd5a97b1694d5bc200ec750caa964110e95ef3820e] <==
	{"level":"info","ts":"2024-12-10T00:04:12.872048Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-10T00:04:12.872256Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"5f41dc21f7a6c607","initial-advertise-peer-urls":["https://192.168.72.54:2380"],"listen-peer-urls":["https://192.168.72.54:2380"],"advertise-client-urls":["https://192.168.72.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-10T00:04:12.872291Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-10T00:04:12.872388Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.54:2380"}
	{"level":"info","ts":"2024-12-10T00:04:12.872409Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.54:2380"}
	{"level":"info","ts":"2024-12-10T00:04:13.146657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-10T00:04:13.146732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-10T00:04:13.146749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 received MsgPreVoteResp from 5f41dc21f7a6c607 at term 1"}
	{"level":"info","ts":"2024-12-10T00:04:13.146760Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 became candidate at term 2"}
	{"level":"info","ts":"2024-12-10T00:04:13.146766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 received MsgVoteResp from 5f41dc21f7a6c607 at term 2"}
	{"level":"info","ts":"2024-12-10T00:04:13.146774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 became leader at term 2"}
	{"level":"info","ts":"2024-12-10T00:04:13.146782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5f41dc21f7a6c607 elected leader 5f41dc21f7a6c607 at term 2"}
	{"level":"info","ts":"2024-12-10T00:04:13.150791Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5f41dc21f7a6c607","local-member-attributes":"{Name:default-k8s-diff-port-871210 ClientURLs:[https://192.168.72.54:2379]}","request-path":"/0/members/5f41dc21f7a6c607/attributes","cluster-id":"770d524238a76c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-10T00:04:13.150928Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:04:13.151505Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:04:13.153610Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:04:13.158148Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T00:04:13.159618Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-10T00:04:13.159651Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-10T00:04:13.160027Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-10T00:04:13.160430Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"770d524238a76c54","local-member-id":"5f41dc21f7a6c607","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:04:13.175015Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:04:13.175103Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:04:13.182688Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T00:04:13.183404Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.54:2379"}
	
	
	==> kernel <==
	 00:13:38 up 14 min,  0 users,  load average: 0.04, 0.12, 0.09
	Linux default-k8s-diff-port-871210 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [35a6da443a267e57163cf919f914e2fce3e9ee54cf7ae17e914e1febf275171b] <==
	W1210 00:04:08.974464       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.099079       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.114683       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.171180       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.211662       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.273471       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.284524       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.319280       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.388108       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.492509       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.540249       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.565097       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.591436       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.592735       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.643331       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.690831       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.800514       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.853394       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.864933       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.891525       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.892853       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:10.132229       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:10.146953       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:10.150446       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:10.186095       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [5e017b3720454255e4a31ba485c6f73a540a911ce08804b8bd412b6fa0ae669a] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 00:09:16.159013       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:09:16.159076       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 00:09:16.160027       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:09:16.161170       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 00:10:16.160724       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:10:16.160841       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 00:10:16.161882       1 handler_proxy.go:99] no RequestInfo found in the context
	I1210 00:10:16.161906       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1210 00:10:16.162007       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 00:10:16.164056       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 00:12:16.163241       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:12:16.163436       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 00:12:16.164406       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:12:16.164442       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 00:12:16.165438       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:12:16.165490       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ffcfbdf5799fe9759c9be24bb1a4680940136ba17437b3687fad21f9dd1978f6] <==
	E1210 00:08:22.111063       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:08:22.544287       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:08:52.117349       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:08:52.551401       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:09:22.122949       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:09:22.557874       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:09:33.371239       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-871210"
	E1210 00:09:52.129129       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:09:52.565713       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:10:22.134541       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:10:22.572272       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:10:25.717135       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="90.415µs"
	I1210 00:10:41.715946       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="89.334µs"
	E1210 00:10:52.140506       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:10:52.579770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:11:22.146177       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:11:22.586359       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:11:52.151512       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:11:52.593429       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:12:22.158829       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:12:22.602076       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:12:52.164528       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:12:52.609526       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:13:22.170030       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:13:22.616920       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4985f7331836a5ae10fb2fb99513b4cc87b70875d16351a7466c968396c6968c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 00:04:25.570215       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 00:04:25.585515       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.54"]
	E1210 00:04:25.585709       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 00:04:25.616681       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 00:04:25.616788       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 00:04:25.616859       1 server_linux.go:169] "Using iptables Proxier"
	I1210 00:04:25.619853       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 00:04:25.620693       1 server.go:483] "Version info" version="v1.31.2"
	I1210 00:04:25.620742       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 00:04:25.622281       1 config.go:199] "Starting service config controller"
	I1210 00:04:25.622348       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 00:04:25.622389       1 config.go:105] "Starting endpoint slice config controller"
	I1210 00:04:25.622405       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 00:04:25.622937       1 config.go:328] "Starting node config controller"
	I1210 00:04:25.624467       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 00:04:25.722801       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1210 00:04:25.722860       1 shared_informer.go:320] Caches are synced for service config
	I1210 00:04:25.724636       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [de46d5ff86dd5461c31f73b51f91a26504e1bc4d47ae016ad46487c82e4a8a62] <==
	W1210 00:04:15.208028       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 00:04:15.213788       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1210 00:04:16.018431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1210 00:04:16.018468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.049597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 00:04:16.049651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.065250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1210 00:04:16.065387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.136519       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 00:04:16.136779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.167347       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1210 00:04:16.167538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.169733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 00:04:16.169821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.249896       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1210 00:04:16.250000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.381496       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1210 00:04:16.381546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.455587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 00:04:16.455697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.458273       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 00:04:16.458373       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1210 00:04:16.469931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 00:04:16.469989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1210 00:04:19.565482       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 00:12:27 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:12:27.845364    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789547844692187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:34 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:12:34.701474    2954 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7g2qm" podUID="49ac129a-c85d-4af1-b3b2-06bc10bced77"
	Dec 10 00:12:37 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:12:37.847394    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789557846780788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:37 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:12:37.847795    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789557846780788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:47 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:12:47.701821    2954 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7g2qm" podUID="49ac129a-c85d-4af1-b3b2-06bc10bced77"
	Dec 10 00:12:47 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:12:47.849693    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789567849199621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:47 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:12:47.849786    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789567849199621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:57 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:12:57.852239    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789577851905715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:57 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:12:57.852627    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789577851905715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:58 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:12:58.701143    2954 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7g2qm" podUID="49ac129a-c85d-4af1-b3b2-06bc10bced77"
	Dec 10 00:13:07 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:13:07.854790    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789587854318437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:07 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:13:07.854831    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789587854318437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:13 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:13:13.701237    2954 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7g2qm" podUID="49ac129a-c85d-4af1-b3b2-06bc10bced77"
	Dec 10 00:13:17 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:13:17.720142    2954 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 00:13:17 default-k8s-diff-port-871210 kubelet[2954]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 00:13:17 default-k8s-diff-port-871210 kubelet[2954]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 00:13:17 default-k8s-diff-port-871210 kubelet[2954]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:13:17 default-k8s-diff-port-871210 kubelet[2954]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:13:17 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:13:17.856503    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789597856097310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:17 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:13:17.856611    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789597856097310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:27 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:13:27.858319    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789607858050981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:27 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:13:27.858648    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789607858050981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:28 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:13:28.701492    2954 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7g2qm" podUID="49ac129a-c85d-4af1-b3b2-06bc10bced77"
	Dec 10 00:13:37 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:13:37.861329    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789617860971651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:37 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:13:37.861375    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789617860971651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f384fa1da72a45dcdbc35cfd2f4fa3ab98712f0cc2d80629ce52fbb0c443f0cc] <==
	I1210 00:04:24.953013       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 00:04:24.979757       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 00:04:24.979880       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 00:04:25.076540       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 00:04:25.114170       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-871210_9fdfc0dc-d5c6-480c-a56d-52cdebcd82e0!
	I1210 00:04:25.116809       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85baa87c-b15b-4bc6-84f8-e3b16b53ecdd", APIVersion:"v1", ResourceVersion:"385", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-871210_9fdfc0dc-d5c6-480c-a56d-52cdebcd82e0 became leader
	I1210 00:04:25.215297       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-871210_9fdfc0dc-d5c6-480c-a56d-52cdebcd82e0!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-871210 -n default-k8s-diff-port-871210
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-871210 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-7g2qm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-871210 describe pod metrics-server-6867b74b74-7g2qm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-871210 describe pod metrics-server-6867b74b74-7g2qm: exit status 1 (78.04417ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-7g2qm" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-871210 describe pod metrics-server-6867b74b74-7g2qm: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.24s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.15s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1210 00:05:26.332798   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:05:48.762604   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:06:33.103167   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:07:11.828961   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-048296 -n no-preload-048296
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-10 00:14:19.345671048 +0000 UTC m=+6143.777286145
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
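
The wait above targets pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and gives up after 9m0s. A minimal sketch of checking the same condition by hand, assuming the kubeconfig context minikube created for this profile is still available:

    # List the dashboard pods the test was polling for.
    kubectl --context no-preload-048296 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
    # Block until one reports Ready, with the same 9-minute budget the test used.
    kubectl --context no-preload-048296 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
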
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048296 -n no-preload-048296
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-048296 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-048296 logs -n 25: (2.026833197s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-030585 sudo cat                              | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo find                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo crio                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-030585                                       | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-866797 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | disable-driver-mounts-866797                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:52 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-048296             | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-825613            | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-871210  | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC | 09 Dec 24 23:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC |                     |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-720064        | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-048296                  | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-825613                 | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-871210       | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC | 10 Dec 24 00:04 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-720064             | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:55:25
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:55:25.509412   84547 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:55:25.509516   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509524   84547 out.go:358] Setting ErrFile to fd 2...
	I1209 23:55:25.509541   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509720   84547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:55:25.510225   84547 out.go:352] Setting JSON to false
	I1209 23:55:25.511117   84547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9476,"bootTime":1733779049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:55:25.511206   84547 start.go:139] virtualization: kvm guest
	I1209 23:55:25.513214   84547 out.go:177] * [old-k8s-version-720064] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:55:25.514823   84547 notify.go:220] Checking for updates...
	I1209 23:55:25.514845   84547 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:55:25.516067   84547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:55:25.517350   84547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:55:25.518678   84547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:55:25.520082   84547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:55:25.521432   84547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:55:25.523092   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:55:25.523499   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.523548   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.538209   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I1209 23:55:25.538728   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.539263   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.539282   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.539570   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.539735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.541550   84547 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 23:55:25.542864   84547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:55:25.543159   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.543196   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.558672   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1209 23:55:25.559075   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.559524   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.559547   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.559881   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.560082   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.595916   84547 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 23:55:25.597365   84547 start.go:297] selected driver: kvm2
	I1209 23:55:25.597380   84547 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.597473   84547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:55:25.598182   84547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.598251   84547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:55:25.612993   84547 install.go:137] /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:55:25.613401   84547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:55:25.613430   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:55:25.613473   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:55:25.613509   84547 start.go:340] cluster config:
	{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.613608   84547 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.615371   84547 out.go:177] * Starting "old-k8s-version-720064" primary control-plane node in "old-k8s-version-720064" cluster
	I1209 23:55:25.371778   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:25.616599   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:55:25.616629   84547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 23:55:25.616636   84547 cache.go:56] Caching tarball of preloaded images
	I1209 23:55:25.616716   84547 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:55:25.616727   84547 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1209 23:55:25.616809   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:55:25.616984   84547 start.go:360] acquireMachinesLock for old-k8s-version-720064: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:55:28.439810   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:34.519811   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:37.591794   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:43.671858   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:46.743891   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:52.823826   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:55.895854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:01.975845   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:05.047862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:11.127840   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:14.199834   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:20.279853   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:23.351850   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:29.431829   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:32.503859   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:38.583822   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:41.655831   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:47.735829   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:50.807862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:56.887827   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:59.959820   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:06.039852   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:09.111843   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:15.191823   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:18.263824   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:24.343852   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:27.415807   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:33.495843   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:36.567854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:42.647854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:45.719902   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:51.799842   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:54.871862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:00.951877   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:04.023859   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:10.103869   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:13.175788   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:19.255805   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:22.327784   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:28.407842   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:31.479864   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:34.483630   83900 start.go:364] duration metric: took 4m35.142298634s to acquireMachinesLock for "embed-certs-825613"
	I1209 23:58:34.483690   83900 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:58:34.483698   83900 fix.go:54] fixHost starting: 
	I1209 23:58:34.484038   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:58:34.484080   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:58:34.499267   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I1209 23:58:34.499717   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:58:34.500208   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:58:34.500236   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:58:34.500552   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:58:34.500747   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:34.500886   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:58:34.502318   83900 fix.go:112] recreateIfNeeded on embed-certs-825613: state=Stopped err=<nil>
	I1209 23:58:34.502340   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	W1209 23:58:34.502480   83900 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:58:34.504290   83900 out.go:177] * Restarting existing kvm2 VM for "embed-certs-825613" ...
	I1209 23:58:34.481505   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:58:34.481545   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:58:34.481889   83859 buildroot.go:166] provisioning hostname "no-preload-048296"
	I1209 23:58:34.481917   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:58:34.482120   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:58:34.483492   83859 machine.go:96] duration metric: took 4m37.418278223s to provisionDockerMachine
	I1209 23:58:34.483534   83859 fix.go:56] duration metric: took 4m37.439725581s for fixHost
	I1209 23:58:34.483542   83859 start.go:83] releasing machines lock for "no-preload-048296", held for 4m37.439753726s
	W1209 23:58:34.483579   83859 start.go:714] error starting host: provision: host is not running
	W1209 23:58:34.483682   83859 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1209 23:58:34.483694   83859 start.go:729] Will try again in 5 seconds ...
	I1209 23:58:34.505520   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Start
	I1209 23:58:34.505676   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring networks are active...
	I1209 23:58:34.506381   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring network default is active
	I1209 23:58:34.506676   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring network mk-embed-certs-825613 is active
	I1209 23:58:34.507046   83900 main.go:141] libmachine: (embed-certs-825613) Getting domain xml...
	I1209 23:58:34.507727   83900 main.go:141] libmachine: (embed-certs-825613) Creating domain...
	I1209 23:58:35.719784   83900 main.go:141] libmachine: (embed-certs-825613) Waiting to get IP...
	I1209 23:58:35.720797   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:35.721325   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:35.721377   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:35.721289   85186 retry.go:31] will retry after 251.225165ms: waiting for machine to come up
	I1209 23:58:35.973801   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:35.974247   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:35.974276   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:35.974205   85186 retry.go:31] will retry after 298.029679ms: waiting for machine to come up
	I1209 23:58:36.273658   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:36.274090   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:36.274119   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:36.274049   85186 retry.go:31] will retry after 467.217404ms: waiting for machine to come up
	I1209 23:58:36.742591   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:36.743027   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:36.743052   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:36.742982   85186 retry.go:31] will retry after 409.741731ms: waiting for machine to come up
	I1209 23:58:37.154543   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:37.154958   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:37.154982   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:37.154916   85186 retry.go:31] will retry after 723.02759ms: waiting for machine to come up
	I1209 23:58:37.878986   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:37.879474   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:37.879507   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:37.879423   85186 retry.go:31] will retry after 767.399861ms: waiting for machine to come up
	I1209 23:58:38.648416   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:38.648903   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:38.648927   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:38.648853   85186 retry.go:31] will retry after 1.06423499s: waiting for machine to come up
	I1209 23:58:39.485363   83859 start.go:360] acquireMachinesLock for no-preload-048296: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:58:39.714711   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:39.715114   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:39.715147   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:39.715058   85186 retry.go:31] will retry after 1.402884688s: waiting for machine to come up
	I1209 23:58:41.119828   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:41.120315   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:41.120359   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:41.120258   85186 retry.go:31] will retry after 1.339874475s: waiting for machine to come up
	I1209 23:58:42.461858   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:42.462314   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:42.462343   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:42.462261   85186 retry.go:31] will retry after 1.455809273s: waiting for machine to come up
	I1209 23:58:43.920097   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:43.920418   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:43.920448   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:43.920371   85186 retry.go:31] will retry after 2.872969061s: waiting for machine to come up
	I1209 23:58:46.796163   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:46.796477   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:46.796499   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:46.796439   85186 retry.go:31] will retry after 2.530677373s: waiting for machine to come up
	I1209 23:58:49.330146   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:49.330601   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:49.330631   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:49.330561   85186 retry.go:31] will retry after 4.492372507s: waiting for machine to come up
	I1209 23:58:53.827982   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.828461   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has current primary IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.828480   83900 main.go:141] libmachine: (embed-certs-825613) Found IP for machine: 192.168.50.19
	I1209 23:58:53.828492   83900 main.go:141] libmachine: (embed-certs-825613) Reserving static IP address...
	I1209 23:58:53.829001   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "embed-certs-825613", mac: "52:54:00:0f:2e:da", ip: "192.168.50.19"} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.829028   83900 main.go:141] libmachine: (embed-certs-825613) Reserved static IP address: 192.168.50.19
	I1209 23:58:53.829051   83900 main.go:141] libmachine: (embed-certs-825613) DBG | skip adding static IP to network mk-embed-certs-825613 - found existing host DHCP lease matching {name: "embed-certs-825613", mac: "52:54:00:0f:2e:da", ip: "192.168.50.19"}
	I1209 23:58:53.829067   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Getting to WaitForSSH function...
	I1209 23:58:53.829080   83900 main.go:141] libmachine: (embed-certs-825613) Waiting for SSH to be available...
	I1209 23:58:53.831079   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.831430   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.831462   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.831630   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Using SSH client type: external
	I1209 23:58:53.831682   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa (-rw-------)
	I1209 23:58:53.831723   83900 main.go:141] libmachine: (embed-certs-825613) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:58:53.831752   83900 main.go:141] libmachine: (embed-certs-825613) DBG | About to run SSH command:
	I1209 23:58:53.831765   83900 main.go:141] libmachine: (embed-certs-825613) DBG | exit 0
	I1209 23:58:53.959446   83900 main.go:141] libmachine: (embed-certs-825613) DBG | SSH cmd err, output: <nil>: 
	I1209 23:58:53.959864   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetConfigRaw
	I1209 23:58:53.960515   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:53.963227   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.963614   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.963644   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.963889   83900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/config.json ...
	I1209 23:58:53.964086   83900 machine.go:93] provisionDockerMachine start ...
	I1209 23:58:53.964103   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:53.964299   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:53.966516   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.966803   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.966834   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.966959   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:53.967149   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:53.967292   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:53.967428   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:53.967599   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:53.967824   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:53.967839   83900 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:58:54.079701   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:58:54.079732   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.080045   83900 buildroot.go:166] provisioning hostname "embed-certs-825613"
	I1209 23:58:54.080079   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.080333   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.082930   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.083283   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.083321   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.083420   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.083657   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.083834   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.083974   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.084095   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.084269   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.084282   83900 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-825613 && echo "embed-certs-825613" | sudo tee /etc/hostname
	I1209 23:58:54.208724   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825613
	
	I1209 23:58:54.208754   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.211739   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.212095   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.212122   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.212297   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.212513   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.212763   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.212936   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.213102   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.213285   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.213311   83900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-825613' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-825613/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-825613' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:58:55.043989   84259 start.go:364] duration metric: took 4m2.194304919s to acquireMachinesLock for "default-k8s-diff-port-871210"
	I1209 23:58:55.044059   84259 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:58:55.044067   84259 fix.go:54] fixHost starting: 
	I1209 23:58:55.044480   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:58:55.044546   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:58:55.060908   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I1209 23:58:55.061391   84259 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:58:55.061954   84259 main.go:141] libmachine: Using API Version  1
	I1209 23:58:55.061982   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:58:55.062346   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:58:55.062508   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:58:55.062674   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1209 23:58:55.064365   84259 fix.go:112] recreateIfNeeded on default-k8s-diff-port-871210: state=Stopped err=<nil>
	I1209 23:58:55.064386   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	W1209 23:58:55.064564   84259 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:58:55.066707   84259 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-871210" ...
	I1209 23:58:54.331911   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:58:54.331941   83900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:58:54.331966   83900 buildroot.go:174] setting up certificates
	I1209 23:58:54.331978   83900 provision.go:84] configureAuth start
	I1209 23:58:54.331991   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.332275   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:54.334597   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.334926   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.334957   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.335117   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.337393   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.337731   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.337763   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.337876   83900 provision.go:143] copyHostCerts
	I1209 23:58:54.337923   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:58:54.337936   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:58:54.338006   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:58:54.338093   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:58:54.338101   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:58:54.338127   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:58:54.338204   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:58:54.338212   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:58:54.338233   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:58:54.338286   83900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.embed-certs-825613 san=[127.0.0.1 192.168.50.19 embed-certs-825613 localhost minikube]
	I1209 23:58:54.423610   83900 provision.go:177] copyRemoteCerts
	I1209 23:58:54.423679   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:58:54.423706   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.426695   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.427009   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.427041   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.427144   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.427332   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.427516   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.427706   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:54.513991   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 23:58:54.536700   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:58:54.558541   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:58:54.580319   83900 provision.go:87] duration metric: took 248.326924ms to configureAuth
	I1209 23:58:54.580359   83900 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:58:54.580568   83900 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:58:54.580652   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.583347   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.583720   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.583748   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.583963   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.584171   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.584327   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.584481   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.584621   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.584775   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.584790   83900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:58:54.804814   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:58:54.804860   83900 machine.go:96] duration metric: took 840.759848ms to provisionDockerMachine
	I1209 23:58:54.804876   83900 start.go:293] postStartSetup for "embed-certs-825613" (driver="kvm2")
	I1209 23:58:54.804891   83900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:58:54.804916   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:54.805269   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:58:54.805297   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.807977   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.808340   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.808370   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.808505   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.808765   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.808945   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.809100   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:54.893741   83900 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:58:54.897936   83900 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:58:54.897961   83900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:58:54.898065   83900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:58:54.898145   83900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:58:54.898235   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:58:54.907056   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:58:54.929618   83900 start.go:296] duration metric: took 124.726532ms for postStartSetup
	I1209 23:58:54.929664   83900 fix.go:56] duration metric: took 20.445966428s for fixHost
	I1209 23:58:54.929711   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.932476   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.932866   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.932893   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.933108   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.933334   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.933523   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.933638   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.933747   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.933956   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.933967   83900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:58:55.043841   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788735.012193373
	
	I1209 23:58:55.043863   83900 fix.go:216] guest clock: 1733788735.012193373
	I1209 23:58:55.043870   83900 fix.go:229] Guest: 2024-12-09 23:58:55.012193373 +0000 UTC Remote: 2024-12-09 23:58:54.929689658 +0000 UTC m=+295.728685701 (delta=82.503715ms)
	I1209 23:58:55.043888   83900 fix.go:200] guest clock delta is within tolerance: 82.503715ms
	I1209 23:58:55.043892   83900 start.go:83] releasing machines lock for "embed-certs-825613", held for 20.560223511s
	I1209 23:58:55.043915   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.044198   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:55.046852   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.047246   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.047283   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.047446   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.047910   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.048101   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.048198   83900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:58:55.048235   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:55.048424   83900 ssh_runner.go:195] Run: cat /version.json
	I1209 23:58:55.048453   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:55.050905   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051274   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.051317   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051339   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051502   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:55.051721   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:55.051772   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.051794   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051925   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:55.051980   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:55.052091   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:55.052133   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:55.052263   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:55.052407   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:55.136384   83900 ssh_runner.go:195] Run: systemctl --version
	I1209 23:58:55.154927   83900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:58:55.297639   83900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:58:55.303733   83900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:58:55.303791   83900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:58:55.323858   83900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:58:55.323884   83900 start.go:495] detecting cgroup driver to use...
	I1209 23:58:55.323955   83900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:58:55.342390   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:58:55.360400   83900 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:58:55.360463   83900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:58:55.374005   83900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:58:55.387515   83900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:58:55.507866   83900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:58:55.676259   83900 docker.go:233] disabling docker service ...
	I1209 23:58:55.676338   83900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:58:55.695273   83900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:58:55.707909   83900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:58:55.824683   83900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:58:55.934080   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:58:55.950700   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:58:55.967756   83900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:58:55.967813   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.978028   83900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:58:55.978102   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.988960   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.999661   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.010253   83900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:58:56.020928   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.030944   83900 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.050251   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.062723   83900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:58:56.072435   83900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:58:56.072501   83900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:58:56.085332   83900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:58:56.095538   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:58:56.214133   83900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:58:56.310023   83900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:58:56.310107   83900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:58:56.314988   83900 start.go:563] Will wait 60s for crictl version
	I1209 23:58:56.315057   83900 ssh_runner.go:195] Run: which crictl
	I1209 23:58:56.318865   83900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:58:56.356889   83900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:58:56.356996   83900 ssh_runner.go:195] Run: crio --version
	I1209 23:58:56.387128   83900 ssh_runner.go:195] Run: crio --version
	I1209 23:58:56.417781   83900 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:58:55.068139   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Start
	I1209 23:58:55.068329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring networks are active...
	I1209 23:58:55.069026   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring network default is active
	I1209 23:58:55.069526   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring network mk-default-k8s-diff-port-871210 is active
	I1209 23:58:55.069970   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Getting domain xml...
	I1209 23:58:55.070725   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Creating domain...
	I1209 23:58:56.366161   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting to get IP...
	I1209 23:58:56.367215   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.367659   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.367720   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.367639   85338 retry.go:31] will retry after 212.733452ms: waiting for machine to come up
	I1209 23:58:56.582400   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.582945   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.582973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.582895   85338 retry.go:31] will retry after 380.03081ms: waiting for machine to come up
	I1209 23:58:56.964721   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.965265   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.965296   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.965208   85338 retry.go:31] will retry after 429.612511ms: waiting for machine to come up
	I1209 23:58:57.396713   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.397267   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.397298   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:57.397236   85338 retry.go:31] will retry after 595.581233ms: waiting for machine to come up
	I1209 23:58:56.418967   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:56.422317   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:56.422632   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:56.422656   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:56.422925   83900 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 23:58:56.426992   83900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:58:56.440436   83900 kubeadm.go:883] updating cluster {Name:embed-certs-825613 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:58:56.440548   83900 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:58:56.440592   83900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:58:56.475823   83900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:58:56.475907   83900 ssh_runner.go:195] Run: which lz4
	I1209 23:58:56.479747   83900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:58:56.483850   83900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:58:56.483900   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:58:57.820979   83900 crio.go:462] duration metric: took 1.341265764s to copy over tarball
	I1209 23:58:57.821091   83900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
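The preload handling above is a check-then-copy step: `stat` shows the tarball is absent on the guest, so the cached cri-o image tarball is transferred and unpacked into /var before images are listed again. A rough sketch of that flow (ensurePreload is written for this report; a local cp stands in for minikube's scp over the driver's SSH session):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensurePreload copies the cached tarball to the guest path if it is missing,
	// then extracts it the same way the log shows (tar -I lz4 into /var).
	func ensurePreload(local, remote string) error {
		if _, err := os.Stat(remote); err == nil {
			return nil // tarball already present, nothing to do
		}
		// minikube does this over SSH; plain cp stands in for the transfer here.
		if err := exec.Command("cp", local, remote).Run(); err != nil {
			return fmt.Errorf("copy preload: %w", err)
		}
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", remote)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		err := ensurePreload(
			"/home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4",
			"/preloaded.tar.lz4",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}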
	I1209 23:58:57.994077   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.994566   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.994590   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:57.994526   85338 retry.go:31] will retry after 728.981009ms: waiting for machine to come up
	I1209 23:58:58.725707   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:58.726209   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:58.726252   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:58.726125   85338 retry.go:31] will retry after 701.836089ms: waiting for machine to come up
	I1209 23:58:59.429804   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:59.430322   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:59.430350   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:59.430266   85338 retry.go:31] will retry after 735.800538ms: waiting for machine to come up
	I1209 23:59:00.167774   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:00.168363   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:00.168397   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:00.168319   85338 retry.go:31] will retry after 1.052511845s: waiting for machine to come up
	I1209 23:59:01.222463   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:01.222960   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:01.222991   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:01.222919   85338 retry.go:31] will retry after 1.655880765s: waiting for machine to come up
	I1209 23:58:59.925304   83900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104178296s)
	I1209 23:58:59.925338   83900 crio.go:469] duration metric: took 2.104325305s to extract the tarball
	I1209 23:58:59.925348   83900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:58:59.962716   83900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:00.008762   83900 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:59:00.008793   83900 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:59:00.008803   83900 kubeadm.go:934] updating node { 192.168.50.19 8443 v1.31.2 crio true true} ...
	I1209 23:59:00.008929   83900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-825613 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:00.009008   83900 ssh_runner.go:195] Run: crio config
	I1209 23:59:00.050777   83900 cni.go:84] Creating CNI manager for ""
	I1209 23:59:00.050801   83900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:00.050813   83900 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:00.050842   83900 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.19 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-825613 NodeName:embed-certs-825613 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:59:00.051001   83900 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-825613"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.19"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.19"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:00.051083   83900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:59:00.062278   83900 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:00.062354   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:00.073157   83900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1209 23:59:00.088933   83900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:00.104358   83900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1209 23:59:00.122425   83900 ssh_runner.go:195] Run: grep 192.168.50.19	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:00.126224   83900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:00.138974   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:00.252489   83900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:00.269127   83900 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613 for IP: 192.168.50.19
	I1209 23:59:00.269155   83900 certs.go:194] generating shared ca certs ...
	I1209 23:59:00.269177   83900 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:00.269359   83900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:00.269406   83900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:00.269421   83900 certs.go:256] generating profile certs ...
	I1209 23:59:00.269551   83900 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/client.key
	I1209 23:59:00.269651   83900 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.key.12f830e7
	I1209 23:59:00.269724   83900 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.key
	I1209 23:59:00.269901   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:00.269954   83900 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:00.269968   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:00.270007   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:00.270056   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:00.270096   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:00.270161   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:00.271078   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:00.322392   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:00.351665   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:00.378615   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:00.401622   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 23:59:00.430280   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 23:59:00.453489   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:00.478368   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:00.501404   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:00.524273   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:00.546132   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:00.568553   83900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:00.584196   83900 ssh_runner.go:195] Run: openssl version
	I1209 23:59:00.589905   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:00.600107   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.604436   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.604504   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.609960   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:00.620129   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:00.629895   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.633993   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.634053   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.639538   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:00.649781   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:00.659593   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.663789   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.663832   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.669033   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:00.678881   83900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:00.683006   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:00.688631   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:00.694212   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:00.699930   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:00.705400   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:00.710668   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
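Each `openssl x509 -noout -checkend 86400` call above simply asks whether a certificate is expired or will expire within the next 24 hours, which is what decides whether the existing control-plane certificates can be reused. An equivalent check in Go (expiresWithin is an illustrative helper, not part of minikube):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within the
	// given window, matching the semantics of `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}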
	I1209 23:59:00.715982   83900 kubeadm.go:392] StartCluster: {Name:embed-certs-825613 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:00.716059   83900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:00.716094   83900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:00.758093   83900 cri.go:89] found id: ""
	I1209 23:59:00.758164   83900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:00.767877   83900 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:00.767895   83900 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:00.767932   83900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:00.777172   83900 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:00.778578   83900 kubeconfig.go:125] found "embed-certs-825613" server: "https://192.168.50.19:8443"
	I1209 23:59:00.780691   83900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:00.790459   83900 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.19
	I1209 23:59:00.790492   83900 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:00.790507   83900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:00.790563   83900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:00.831048   83900 cri.go:89] found id: ""
	I1209 23:59:00.831129   83900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:00.847797   83900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:00.857819   83900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:00.857841   83900 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:00.857877   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:59:00.867108   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:00.867159   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:00.876742   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:59:00.885861   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:00.885942   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:00.895651   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:59:00.904027   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:00.904092   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:00.912956   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:59:00.921647   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:00.921704   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:00.930445   83900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:00.939496   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:01.049561   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.012953   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.234060   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.302650   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.378572   83900 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:02.378677   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:02.878755   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:03.379050   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:03.879215   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:02.880033   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:02.880532   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:02.880560   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:02.880488   85338 retry.go:31] will retry after 2.112793708s: waiting for machine to come up
	I1209 23:59:04.994713   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:04.995223   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:04.995254   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:04.995183   85338 retry.go:31] will retry after 2.420394988s: waiting for machine to come up
	I1209 23:59:07.416846   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:07.417309   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:07.417338   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:07.417281   85338 retry.go:31] will retry after 2.479420656s: waiting for machine to come up
	I1209 23:59:04.379735   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:04.406446   83900 api_server.go:72] duration metric: took 2.027872494s to wait for apiserver process to appear ...
	I1209 23:59:04.406480   83900 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:59:04.406506   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:04.407056   83900 api_server.go:269] stopped: https://192.168.50.19:8443/healthz: Get "https://192.168.50.19:8443/healthz": dial tcp 192.168.50.19:8443: connect: connection refused
	I1209 23:59:04.907381   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:06.967755   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:06.967791   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:06.967808   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.003261   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:07.003295   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:07.406692   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.411139   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:07.411168   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:07.906689   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.911136   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:07.911176   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:08.406697   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:08.411051   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 200:
	ok
	I1209 23:59:08.422472   83900 api_server.go:141] control plane version: v1.31.2
	I1209 23:59:08.422506   83900 api_server.go:131] duration metric: took 4.01601823s to wait for apiserver health ...
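The healthz wait above keeps retrying through the 403 ("system:anonymous") and 500 (post-start hooks still running) responses until the apiserver finally answers 200. A minimal standalone sketch of that kind of poll loop, assuming a plain net/http client with certificate verification disabled; this is illustrative only, not minikube's actual api_server.go code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // During bootstrap the apiserver serves /healthz over HTTPS with a
        // cluster-local CA, so verification is skipped here for illustration.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.50.19:8443/healthz" // endpoint taken from the log above
        for {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("apiserver not reachable yet:", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // healthy
                }
                // 403 and 500 are treated as "not ready yet", matching the retries above.
            }
            time.Sleep(500 * time.Millisecond)
        }
    }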
	I1209 23:59:08.422532   83900 cni.go:84] Creating CNI manager for ""
	I1209 23:59:08.422541   83900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:08.424238   83900 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:59:08.425539   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:59:08.435424   83900 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
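The log records a ~496-byte bridge CNI conflist being copied to /etc/cni/net.d/1-k8s.conflist but not its contents. The sketch below writes an illustrative bridge+portmap conflist to the working directory; the plugin options and the 10.244.0.0/16 subnet are assumptions for illustration, not the exact file minikube generated:

    package main

    import (
        "fmt"
        "os"
    )

    // Illustrative conflist only; the real file's contents are not shown in the log.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        // Written to the working directory rather than /etc/cni/net.d so the
        // sketch can run without root.
        if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("wrote 1-k8s.conflist,", len(conflist), "bytes")
    }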
	I1209 23:59:08.451320   83900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:59:08.461244   83900 system_pods.go:59] 8 kube-system pods found
	I1209 23:59:08.461285   83900 system_pods.go:61] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:59:08.461296   83900 system_pods.go:61] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:59:08.461305   83900 system_pods.go:61] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:59:08.461313   83900 system_pods.go:61] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:59:08.461322   83900 system_pods.go:61] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 23:59:08.461329   83900 system_pods.go:61] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:59:08.461346   83900 system_pods.go:61] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:59:08.461358   83900 system_pods.go:61] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 23:59:08.461368   83900 system_pods.go:74] duration metric: took 10.019045ms to wait for pod list to return data ...
	I1209 23:59:08.461381   83900 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:59:08.464945   83900 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:59:08.464973   83900 node_conditions.go:123] node cpu capacity is 2
	I1209 23:59:08.464986   83900 node_conditions.go:105] duration metric: took 3.600013ms to run NodePressure ...
	I1209 23:59:08.465011   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:08.762283   83900 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 23:59:08.766618   83900 kubeadm.go:739] kubelet initialised
	I1209 23:59:08.766640   83900 kubeadm.go:740] duration metric: took 4.332483ms waiting for restarted kubelet to initialise ...
	I1209 23:59:08.766648   83900 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:08.771039   83900 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.775795   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.775820   83900 pod_ready.go:82] duration metric: took 4.756744ms for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.775829   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.775836   83900 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.780473   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "etcd-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.780498   83900 pod_ready.go:82] duration metric: took 4.651756ms for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.780507   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "etcd-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.780517   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.785226   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.785252   83900 pod_ready.go:82] duration metric: took 4.725948ms for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.785261   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.785268   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.855086   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.855117   83900 pod_ready.go:82] duration metric: took 69.839948ms for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.855129   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.855141   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:09.255415   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-proxy-rn6fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.255451   83900 pod_ready.go:82] duration metric: took 400.29383ms for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:09.255461   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-proxy-rn6fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.255467   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:09.654750   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.654775   83900 pod_ready.go:82] duration metric: took 399.301549ms for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:09.654785   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.654792   83900 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:10.054952   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:10.054979   83900 pod_ready.go:82] duration metric: took 400.177675ms for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:10.054995   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:10.055002   83900 pod_ready.go:39] duration metric: took 1.288346997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:10.055019   83900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:59:10.066533   83900 ops.go:34] apiserver oom_adj: -16
	I1209 23:59:10.066559   83900 kubeadm.go:597] duration metric: took 9.298658158s to restartPrimaryControlPlane
	I1209 23:59:10.066570   83900 kubeadm.go:394] duration metric: took 9.350595042s to StartCluster
	I1209 23:59:10.066590   83900 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:10.066674   83900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:59:10.068469   83900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:10.068732   83900 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:59:10.068795   83900 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 23:59:10.068901   83900 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-825613"
	I1209 23:59:10.068925   83900 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-825613"
	W1209 23:59:10.068932   83900 addons.go:243] addon storage-provisioner should already be in state true
	I1209 23:59:10.068966   83900 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:10.068960   83900 addons.go:69] Setting default-storageclass=true in profile "embed-certs-825613"
	I1209 23:59:10.068969   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.069011   83900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-825613"
	I1209 23:59:10.068972   83900 addons.go:69] Setting metrics-server=true in profile "embed-certs-825613"
	I1209 23:59:10.069584   83900 addons.go:234] Setting addon metrics-server=true in "embed-certs-825613"
	W1209 23:59:10.069616   83900 addons.go:243] addon metrics-server should already be in state true
	I1209 23:59:10.069644   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.070176   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.070220   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.070219   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.070255   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.070947   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.071024   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.071039   83900 out.go:177] * Verifying Kubernetes components...
	I1209 23:59:10.073122   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:10.085793   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I1209 23:59:10.086012   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I1209 23:59:10.086282   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.086412   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.086794   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.086823   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.086907   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.086930   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.087156   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38521
	I1209 23:59:10.087160   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.087251   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.087469   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.087617   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.087792   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.087828   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.088155   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.088177   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.088692   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.089323   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.089358   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.090339   83900 addons.go:234] Setting addon default-storageclass=true in "embed-certs-825613"
	W1209 23:59:10.090357   83900 addons.go:243] addon default-storageclass should already be in state true
	I1209 23:59:10.090379   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.090609   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.090639   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.103251   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I1209 23:59:10.103791   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104305   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.104325   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.104376   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I1209 23:59:10.104560   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I1209 23:59:10.104713   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.104736   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104848   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104854   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.105269   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.105285   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.105393   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.105414   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.105595   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.105751   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.105784   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.106203   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.106235   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.107268   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.107710   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.109781   83900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:10.109863   83900 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 23:59:10.111198   83900 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:59:10.111210   83900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:59:10.111224   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.111294   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:59:10.111300   83900 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:59:10.111309   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.114610   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.114929   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.114962   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.114980   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.115159   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.115318   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.115438   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.115474   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.115516   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.115599   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.115704   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.115844   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.115944   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.116149   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.123450   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1209 23:59:10.123926   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.124406   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.124445   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.124744   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.124936   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.126437   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.126619   83900 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:59:10.126637   83900 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:59:10.126656   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.129218   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.129663   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.129695   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.129874   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.130055   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.130164   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.130296   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.272711   83900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:10.290121   83900 node_ready.go:35] waiting up to 6m0s for node "embed-certs-825613" to be "Ready" ...
	I1209 23:59:10.379383   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:59:10.397672   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:59:10.403543   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:59:10.403591   83900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 23:59:10.439890   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:59:10.439915   83900 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:59:10.482300   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:59:10.482331   83900 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:59:10.550923   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:59:11.613572   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.234149831s)
	I1209 23:59:11.613622   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.613634   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.613655   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.21594498s)
	I1209 23:59:11.613701   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.613713   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.613929   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.613976   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.613992   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.613979   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614006   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.614018   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614032   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.614052   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.614000   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.614085   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.614332   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614343   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.614343   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614361   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614362   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614372   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.621731   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.621749   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.622001   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.622017   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664217   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113253318s)
	I1209 23:59:11.664273   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.664288   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.664633   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.664637   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.664649   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664658   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.664676   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.664875   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.664888   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664903   83900 addons.go:475] Verifying addon metrics-server=true in "embed-certs-825613"
	I1209 23:59:11.666814   83900 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1209 23:59:09.899886   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:09.900284   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:09.900314   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:09.900262   85338 retry.go:31] will retry after 3.641244983s: waiting for machine to come up
	I1209 23:59:11.668002   83900 addons.go:510] duration metric: took 1.599215886s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1209 23:59:12.293475   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
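node_ready.go polls the node object until its Ready condition turns True (the check at 23:59:12 above still sees "Ready":"False"). A rough client-go equivalent of that wait, reusing the kubeconfig path and node name from the log; an assumed sketch, not minikube's own implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path and node name are taken from the log; adjust for your environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19888-18950/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same budget as the 6m0s wait in the log
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-825613", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for node to become Ready")
    }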
	I1209 23:59:14.960350   84547 start.go:364] duration metric: took 3m49.343321897s to acquireMachinesLock for "old-k8s-version-720064"
	I1209 23:59:14.960428   84547 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:59:14.960440   84547 fix.go:54] fixHost starting: 
	I1209 23:59:14.960886   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:14.960950   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:14.981976   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I1209 23:59:14.982425   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:14.982933   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:59:14.982966   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:14.983341   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:14.983587   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:14.983772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetState
	I1209 23:59:14.985748   84547 fix.go:112] recreateIfNeeded on old-k8s-version-720064: state=Stopped err=<nil>
	I1209 23:59:14.985774   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	W1209 23:59:14.985968   84547 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:59:14.987652   84547 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-720064" ...
	I1209 23:59:14.988869   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .Start
	I1209 23:59:14.989086   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring networks are active...
	I1209 23:59:14.989817   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network default is active
	I1209 23:59:14.990188   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network mk-old-k8s-version-720064 is active
	I1209 23:59:14.990640   84547 main.go:141] libmachine: (old-k8s-version-720064) Getting domain xml...
	I1209 23:59:14.991392   84547 main.go:141] libmachine: (old-k8s-version-720064) Creating domain...
	I1209 23:59:13.544971   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.545381   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Found IP for machine: 192.168.72.54
	I1209 23:59:13.545408   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has current primary IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.545418   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Reserving static IP address...
	I1209 23:59:13.545913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-871210", mac: "52:54:00:5e:5b:a3", ip: "192.168.72.54"} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.545942   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | skip adding static IP to network mk-default-k8s-diff-port-871210 - found existing host DHCP lease matching {name: "default-k8s-diff-port-871210", mac: "52:54:00:5e:5b:a3", ip: "192.168.72.54"}
	I1209 23:59:13.545957   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Reserved static IP address: 192.168.72.54
	I1209 23:59:13.545973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for SSH to be available...
	I1209 23:59:13.545988   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Getting to WaitForSSH function...
	I1209 23:59:13.548279   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.548641   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.548700   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.548788   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Using SSH client type: external
	I1209 23:59:13.548812   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa (-rw-------)
	I1209 23:59:13.548882   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:13.548908   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | About to run SSH command:
	I1209 23:59:13.548923   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | exit 0
	I1209 23:59:13.687704   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | SSH cmd err, output: <nil>: 
	I1209 23:59:13.688021   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetConfigRaw
	I1209 23:59:13.688673   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:13.690853   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.691187   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.691224   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.691421   84259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/config.json ...
	I1209 23:59:13.691646   84259 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:13.691665   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:13.691901   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.693958   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.694230   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.694257   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.694392   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.694602   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.694771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.694913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.695093   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.695278   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.695288   84259 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:13.803602   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:13.803629   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:13.803904   84259 buildroot.go:166] provisioning hostname "default-k8s-diff-port-871210"
	I1209 23:59:13.803928   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:13.804106   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.806369   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.806769   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.806800   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.806906   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.807053   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.807222   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.807329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.807551   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.807751   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.807765   84259 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-871210 && echo "default-k8s-diff-port-871210" | sudo tee /etc/hostname
	I1209 23:59:13.932849   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-871210
	
	I1209 23:59:13.932874   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.935573   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.935943   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.935972   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.936143   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.936388   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.936546   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.936682   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.936840   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.937016   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.937046   84259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-871210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-871210/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-871210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:14.057493   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
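The provisioning steps above (printing the hostname, setting it, and patching /etc/hosts) all run through the "native" Go SSH client the log keeps referring to. As a point of reference only, here is a minimal sketch of that run-a-command-over-SSH pattern using golang.org/x/crypto/ssh, with the user, key path and address taken from this log; it is an illustration of the pattern, not minikube's actual ssh_runner implementation.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path, user and address copied from the log above; purely illustrative.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
        }
        client, err := ssh.Dial("tcp", "192.168.72.54:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        out, err := sess.CombinedOutput("hostname")
        // Mirrors the "SSH cmd err, output: ..." lines printed in the log.
        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }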
	I1209 23:59:14.057526   84259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:14.057565   84259 buildroot.go:174] setting up certificates
	I1209 23:59:14.057575   84259 provision.go:84] configureAuth start
	I1209 23:59:14.057586   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:14.057860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:14.060619   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.060947   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.060973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.061170   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.063591   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.063919   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.063946   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.064133   84259 provision.go:143] copyHostCerts
	I1209 23:59:14.064180   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:14.064189   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:14.064242   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:14.064333   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:14.064341   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:14.064364   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:14.064416   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:14.064424   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:14.064442   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:14.064488   84259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-871210 san=[127.0.0.1 192.168.72.54 default-k8s-diff-port-871210 localhost minikube]
	I1209 23:59:14.332090   84259 provision.go:177] copyRemoteCerts
	I1209 23:59:14.332148   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:14.332174   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.334856   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.335275   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.335309   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.335428   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.335647   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.335860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.336002   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.422641   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:14.445943   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1209 23:59:14.468188   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:59:14.490678   84259 provision.go:87] duration metric: took 433.06125ms to configureAuth
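configureAuth above regenerates the docker-machine style server certificate with the SANs shown in the "generating server cert ... san=[...]" line, then copies it to /etc/docker on the guest. The following is only a rough sketch of producing such a CA-signed server certificate with crypto/x509; the file names are illustrative and the PKCS#1 RSA CA key format is an assumption, not something this log confirms.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "errors"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            log.Fatal(err)
        }
        return v
    }

    // pemBytes reads a PEM file and returns the DER bytes of its first block.
    func pemBytes(path string) ([]byte, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return nil, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return nil, errors.New("no PEM block in " + path)
        }
        return block.Bytes, nil
    }

    func main() {
        caCert := must(x509.ParseCertificate(must(pemBytes("ca.pem"))))
        caKey := must(x509.ParsePKCS1PrivateKey(must(pemBytes("ca-key.pem")))) // assumption: PKCS#1 RSA key
        serverKey := must(rsa.GenerateKey(rand.Reader, 2048))

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-871210"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration value from the profile config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the "generating server cert ... san=[...]" line above.
            DNSNames:    []string{"default-k8s-diff-port-871210", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.54")},
        }
        der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }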
	I1209 23:59:14.490710   84259 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:14.490883   84259 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:14.490953   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.493529   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.493858   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.493879   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.494073   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.494276   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.494435   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.494555   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.494716   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:14.494880   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:14.494899   84259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:14.720424   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:14.720452   84259 machine.go:96] duration metric: took 1.028790944s to provisionDockerMachine
	I1209 23:59:14.720468   84259 start.go:293] postStartSetup for "default-k8s-diff-port-871210" (driver="kvm2")
	I1209 23:59:14.720480   84259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:14.720496   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.720814   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:14.720844   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.723417   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.723817   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.723851   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.723992   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.724171   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.724328   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.724453   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.810295   84259 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:14.814400   84259 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:14.814424   84259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:14.814501   84259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:14.814595   84259 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:14.814706   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:14.823765   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:14.847185   84259 start.go:296] duration metric: took 126.701807ms for postStartSetup
	I1209 23:59:14.847226   84259 fix.go:56] duration metric: took 19.803160459s for fixHost
	I1209 23:59:14.847245   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.850067   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.850488   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.850516   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.850697   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.850915   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.851082   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.851218   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.851369   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:14.851558   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:14.851588   84259 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:14.960167   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788754.933814431
	
	I1209 23:59:14.960190   84259 fix.go:216] guest clock: 1733788754.933814431
	I1209 23:59:14.960198   84259 fix.go:229] Guest: 2024-12-09 23:59:14.933814431 +0000 UTC Remote: 2024-12-09 23:59:14.847230309 +0000 UTC m=+262.142902203 (delta=86.584122ms)
	I1209 23:59:14.960217   84259 fix.go:200] guest clock delta is within tolerance: 86.584122ms
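The guest clock check above parses the VM's `date +%s.%N` output and compares it with the host-side timestamp of the same command, resynchronizing only when the difference exceeds a tolerance. A minimal sketch of that comparison, reusing the exact values from the log; the 2s tolerance is an assumption here, the real threshold lives in minikube's fix.go and is not shown in this output.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "1733788754.933814431" (seconds.nanoseconds) into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // Right-pad the fraction to 9 digits so it is interpreted as nanoseconds.
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec).UTC(), nil
    }

    func main() {
        guest, err := parseGuestClock("1733788754.933814431") // guest clock value from the log above
        if err != nil {
            panic(err)
        }
        remote := time.Date(2024, 12, 9, 23, 59, 14, 847230309, time.UTC) // host-side "Remote" timestamp from the log
        delta := guest.Sub(remote)

        const tolerance = 2 * time.Second // assumed; the real value is defined in fix.go
        if delta < -tolerance || delta > tolerance {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
        } else {
            // Prints: guest clock delta is within tolerance: 86.584122ms (matching the log line above)
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        }
    }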
	I1209 23:59:14.960222   84259 start.go:83] releasing machines lock for "default-k8s-diff-port-871210", held for 19.916185392s
	I1209 23:59:14.960244   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.960544   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:14.963243   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.963653   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.963693   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.963820   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964285   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964505   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964612   84259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:14.964689   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.964749   84259 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:14.964772   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.967430   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.967811   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.967844   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.967871   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.968018   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.968194   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.968388   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.968405   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.968427   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.968563   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.968562   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.968706   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.968878   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.969038   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:15.072098   84259 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:15.077846   84259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:15.222108   84259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:15.230013   84259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:15.230097   84259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:15.247065   84259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:15.247091   84259 start.go:495] detecting cgroup driver to use...
	I1209 23:59:15.247168   84259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:15.268262   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:15.283824   84259 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:15.283893   84259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:15.299384   84259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:15.318727   84259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:15.457157   84259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:15.642629   84259 docker.go:233] disabling docker service ...
	I1209 23:59:15.642718   84259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:15.661646   84259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:15.681887   84259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:15.838344   84259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:15.971028   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:15.984896   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:16.006631   84259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:59:16.006691   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.019058   84259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:16.019143   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.031838   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.043176   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.057386   84259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:16.068694   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.083326   84259 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.102288   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.113538   84259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:16.126450   84259 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:16.126516   84259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:16.147057   84259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:16.157329   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:16.285726   84259 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:16.385931   84259 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:16.386014   84259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:16.390840   84259 start.go:563] Will wait 60s for crictl version
	I1209 23:59:16.390912   84259 ssh_runner.go:195] Run: which crictl
	I1209 23:59:16.394870   84259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:16.438893   84259 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:16.438986   84259 ssh_runner.go:195] Run: crio --version
	I1209 23:59:16.470118   84259 ssh_runner.go:195] Run: crio --version
	I1209 23:59:16.499603   84259 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:59:16.500665   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:16.503766   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:16.504242   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:16.504329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:16.504609   84259 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:16.508742   84259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:16.520816   84259 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-871210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:16.520944   84259 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:59:16.520988   84259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:16.563220   84259 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:59:16.563315   84259 ssh_runner.go:195] Run: which lz4
	I1209 23:59:16.568962   84259 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:59:16.573715   84259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:59:16.573756   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:59:14.294886   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:16.796597   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:17.795347   83900 node_ready.go:49] node "embed-certs-825613" has status "Ready":"True"
	I1209 23:59:17.795381   83900 node_ready.go:38] duration metric: took 7.505214814s for node "embed-certs-825613" to be "Ready" ...
	I1209 23:59:17.795394   83900 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:17.801650   83900 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:17.808087   83900 pod_ready.go:93] pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:17.808113   83900 pod_ready.go:82] duration metric: took 6.437717ms for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:17.808127   83900 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:16.350842   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting to get IP...
	I1209 23:59:16.351883   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.352326   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.352393   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.352304   85519 retry.go:31] will retry after 268.149849ms: waiting for machine to come up
	I1209 23:59:16.621742   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.622116   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.622145   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.622066   85519 retry.go:31] will retry after 365.051996ms: waiting for machine to come up
	I1209 23:59:16.988590   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.989124   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.989154   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.989089   85519 retry.go:31] will retry after 441.697933ms: waiting for machine to come up
	I1209 23:59:17.432962   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.433453   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.433482   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.433356   85519 retry.go:31] will retry after 503.173846ms: waiting for machine to come up
	I1209 23:59:17.938107   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.938576   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.938610   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.938533   85519 retry.go:31] will retry after 476.993358ms: waiting for machine to come up
	I1209 23:59:18.417462   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:18.418037   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:18.418064   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:18.417989   85519 retry.go:31] will retry after 732.449849ms: waiting for machine to come up
	I1209 23:59:19.152120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.152680   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.152708   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.152624   85519 retry.go:31] will retry after 764.17794ms: waiting for machine to come up
	I1209 23:59:19.918630   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.919113   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.919141   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.919066   85519 retry.go:31] will retry after 1.072352346s: waiting for machine to come up
	I1209 23:59:17.907764   84259 crio.go:462] duration metric: took 1.338836821s to copy over tarball
	I1209 23:59:17.907846   84259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:59:20.133171   84259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.22529043s)
	I1209 23:59:20.133214   84259 crio.go:469] duration metric: took 2.225420822s to extract the tarball
	I1209 23:59:20.133224   84259 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:59:20.169786   84259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:20.211990   84259 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:59:20.212020   84259 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:59:20.212030   84259 kubeadm.go:934] updating node { 192.168.72.54 8444 v1.31.2 crio true true} ...
	I1209 23:59:20.212166   84259 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-871210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:20.212247   84259 ssh_runner.go:195] Run: crio config
	I1209 23:59:20.255141   84259 cni.go:84] Creating CNI manager for ""
	I1209 23:59:20.255169   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:20.255182   84259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:20.255215   84259 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-871210 NodeName:default-k8s-diff-port-871210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:59:20.255338   84259 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-871210"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.54"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:20.255407   84259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:59:20.265160   84259 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:20.265244   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:20.275799   84259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1209 23:59:20.292089   84259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:20.308054   84259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1209 23:59:20.327464   84259 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:20.331101   84259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:20.343232   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:20.473225   84259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:20.492069   84259 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210 for IP: 192.168.72.54
	I1209 23:59:20.492098   84259 certs.go:194] generating shared ca certs ...
	I1209 23:59:20.492119   84259 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:20.492311   84259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:20.492363   84259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:20.492378   84259 certs.go:256] generating profile certs ...
	I1209 23:59:20.492499   84259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/client.key
	I1209 23:59:20.492605   84259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.key.9cc284f2
	I1209 23:59:20.492663   84259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.key
	I1209 23:59:20.492831   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:20.492872   84259 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:20.492886   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:20.492918   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:20.492951   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:20.492997   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:20.493071   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:20.494023   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:20.543769   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:20.583697   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:20.615170   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:20.638784   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 23:59:20.673708   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:59:20.696690   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:20.719682   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:20.744292   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:20.767643   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:20.790927   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:20.816746   84259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:20.834150   84259 ssh_runner.go:195] Run: openssl version
	I1209 23:59:20.840153   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:20.851006   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.855430   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.855492   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.860866   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:20.871983   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:20.882642   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.886978   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.887050   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.892472   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:20.902959   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:20.913774   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.918344   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.918394   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.923981   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:20.934931   84259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:20.939662   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:20.945633   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:20.951650   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:20.957628   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:20.963600   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:20.969399   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
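Each of the `openssl x509 -noout -in <file> -checkend 86400` calls above simply asks whether the certificate will still be valid 24 hours from now; a non-zero exit means it expires within that window and minikube would regenerate it. An equivalent check in Go, as a sketch only; the path is one of the files named above and in the real flow would be read over SSH rather than locally.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Same semantics as: openssl x509 -noout -in <file> -checkend 86400
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(86400 * time.Second)
        if deadline.After(cert.NotAfter) {
            fmt.Println("certificate expires within 86400s; it would be regenerated")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 86400s")
    }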
	I1209 23:59:20.974992   84259 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-871210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:20.975088   84259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:20.975143   84259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:21.012553   84259 cri.go:89] found id: ""
	I1209 23:59:21.012647   84259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:21.023728   84259 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:21.023758   84259 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:21.023808   84259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:21.033788   84259 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:21.034806   84259 kubeconfig.go:125] found "default-k8s-diff-port-871210" server: "https://192.168.72.54:8444"
	I1209 23:59:21.036852   84259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:21.046251   84259 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I1209 23:59:21.046280   84259 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:21.046292   84259 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:21.046354   84259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:21.084915   84259 cri.go:89] found id: ""
	I1209 23:59:21.085000   84259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:21.100592   84259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:21.111180   84259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:21.111204   84259 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:21.111254   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 23:59:21.120027   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:21.120087   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:21.129165   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 23:59:21.138254   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:21.138315   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:21.147276   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 23:59:21.155635   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:21.155711   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:21.164794   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 23:59:21.173421   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:21.173477   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
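The four grep/rm pairs above implement a simple stale-config check: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted before kubeadm regenerates it. A minimal Go sketch of the same idea, operating on the local filesystem instead of minikube's ssh_runner (the removeStaleKubeconfigs helper is hypothetical, not minikube's kubeadm.go):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // removeStaleKubeconfigs deletes any listed kubeconfig that does not mention
    // the expected control-plane endpoint, mirroring the grep/rm sequence above.
    func removeStaleKubeconfigs(endpoint string, files []string) error {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil {
    			if os.IsNotExist(err) {
    				continue // nothing to clean up for this file
    			}
    			return err
    		}
    		if !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
    			if err := os.Remove(f); err != nil {
    				return err
    			}
    		}
    	}
    	return nil
    }

    func main() {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	if err := removeStaleKubeconfigs("https://control-plane.minikube.internal:8444", files); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }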
	I1209 23:59:21.182615   84259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:21.191795   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:21.289295   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.415644   84259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.126305379s)
	I1209 23:59:22.415695   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.616211   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.672571   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
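Rather than a full kubeadm init, the restart path re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A rough Go sketch of driving that sequence with os/exec; minikube actually runs these over SSH inside the VM with sudo and a PATH override, so this is only an illustration of the phase ordering:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// The phases re-run on restart, in the same order as the log above.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append(append([]string{}, phase...), "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
    			os.Exit(1)
    		}
    	}
    }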
	I1209 23:59:22.731282   84259 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:22.731363   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:19.816966   83900 pod_ready.go:103] pod "etcd-embed-certs-825613" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:21.814625   83900 pod_ready.go:93] pod "etcd-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.814650   83900 pod_ready.go:82] duration metric: took 4.006513871s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.814662   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.819611   83900 pod_ready.go:93] pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.819634   83900 pod_ready.go:82] duration metric: took 4.964081ms for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.819647   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.823994   83900 pod_ready.go:93] pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.824016   83900 pod_ready.go:82] duration metric: took 4.360902ms for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.824028   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.829102   83900 pod_ready.go:93] pod "kube-proxy-rn6fg" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.829128   83900 pod_ready.go:82] duration metric: took 5.090595ms for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.829161   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:23.983548   83900 pod_ready.go:93] pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:23.983603   83900 pod_ready.go:82] duration metric: took 2.154427639s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:23.983620   83900 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
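Each pod_ready wait above polls a single kube-system pod until its Ready condition reports True, within a 6m budget. A compact client-go sketch of that loop; the kubeconfig path is a placeholder, the pod name is simply taken from the log, and this is not minikube's pod_ready.go:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-825613", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }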
	I1209 23:59:20.992747   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:20.993138   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:20.993196   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:20.993111   85519 retry.go:31] will retry after 1.479639772s: waiting for machine to come up
	I1209 23:59:22.474889   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:22.475414   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:22.475445   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:22.475364   85519 retry.go:31] will retry after 2.248463233s: waiting for machine to come up
	I1209 23:59:24.725307   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:24.725765   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:24.725796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:24.725706   85519 retry.go:31] will retry after 2.803512929s: waiting for machine to come up
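While the VM boots, the kvm2 driver repeatedly asks libvirt for the domain's DHCP lease and backs off with a growing, jittered delay until an IP appears. A generic Go sketch of that retry loop; lookupIP is a stand-in for the libvirt query, and the backoff factors are assumptions rather than the driver's actual retry.go behaviour:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for querying libvirt for the domain's current DHCP lease.
    func lookupIP(domain string) (string, error) {
    	return "", errors.New("unable to find current IP address") // placeholder
    }

    func main() {
    	delay := time.Second
    	for attempt := 1; attempt <= 10; attempt++ {
    		ip, err := lookupIP("old-k8s-version-720064")
    		if err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		// Grow the delay and add jitter, like the increasing waits in the log.
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("retry %d: will retry after %s: waiting for machine to come up\n", attempt, wait)
    		time.Sleep(wait)
    		delay = delay * 3 / 2
    	}
    	fmt.Println("gave up waiting for an IP address")
    }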
	I1209 23:59:23.231575   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:23.731704   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:24.231740   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:24.265107   84259 api_server.go:72] duration metric: took 1.533824471s to wait for apiserver process to appear ...
	I1209 23:59:24.265144   84259 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:59:24.265173   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:24.265664   84259 api_server.go:269] stopped: https://192.168.72.54:8444/healthz: Get "https://192.168.72.54:8444/healthz": dial tcp 192.168.72.54:8444: connect: connection refused
	I1209 23:59:24.765571   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.251990   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:27.252018   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:27.252033   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.284733   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:27.284769   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:27.284781   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.313967   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:27.313998   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:25.990720   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:28.490332   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:27.530567   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:27.531086   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:27.531120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:27.531022   85519 retry.go:31] will retry after 2.800227475s: waiting for machine to come up
	I1209 23:59:30.333201   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:30.333660   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:30.333684   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:30.333621   85519 retry.go:31] will retry after 3.041733113s: waiting for machine to come up
	I1209 23:59:27.765806   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.773528   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:27.773564   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:28.266012   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:28.275940   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:28.275965   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:28.765502   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:28.770695   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:28.770725   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:29.265309   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:29.270844   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:29.270883   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:29.765323   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:29.769949   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:29.769974   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:30.265981   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:30.270284   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:30.270313   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:30.765915   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:30.771542   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I1209 23:59:30.781357   84259 api_server.go:141] control plane version: v1.31.2
	I1209 23:59:30.781390   84259 api_server.go:131] duration metric: took 6.516238077s to wait for apiserver health ...
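The healthz transcript above reduces to a poll loop: GET https://<node-ip>:8444/healthz, treat connection refused, 403 (anonymous user) and 500 (pending poststarthooks) as "not ready yet", and stop once the endpoint returns 200 with body "ok". A minimal Go sketch of that loop, skipping TLS verification because this bare probe does not carry the cluster CA; it is an illustration, not minikube's api_server.go:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver serves a cluster-signed certificate, so this probe
    		// skips verification; a real client would trust the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.72.54:8444/healthz"
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
    		} else {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz ok")
    				return
    			}
    			// 403 and 500 both mean "keep waiting", as in the log above.
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }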
	I1209 23:59:30.781400   84259 cni.go:84] Creating CNI manager for ""
	I1209 23:59:30.781409   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:30.783438   84259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:59:30.784794   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:59:30.794916   84259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 23:59:30.812099   84259 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:59:30.822915   84259 system_pods.go:59] 8 kube-system pods found
	I1209 23:59:30.822967   84259 system_pods.go:61] "coredns-7c65d6cfc9-wclgl" [22b2e24e-5a03-4d4f-a071-68e414aaf6cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:59:30.822980   84259 system_pods.go:61] "etcd-default-k8s-diff-port-871210" [2058a33f-dd4b-42bc-94d9-4cd130b9389c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:59:30.822990   84259 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-871210" [26ffd01d-48b5-4e38-a91b-bf75dd75e3c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:59:30.823000   84259 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-871210" [cea3a810-c7e9-48d6-9fd7-1f6258c45387] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:59:30.823006   84259 system_pods.go:61] "kube-proxy-d7sxm" [e28ccb5f-a282-4371-ae1a-fca52dd58616] Running
	I1209 23:59:30.823021   84259 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-871210" [790619f6-29a2-4bbc-89ba-a7b93653459b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:59:30.823033   84259 system_pods.go:61] "metrics-server-6867b74b74-lgzdz" [2c251249-58b6-44c4-929a-0f5c963d83b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:59:30.823039   84259 system_pods.go:61] "storage-provisioner" [63078ee5-81b8-4548-88bf-2b89857ea01a] Running
	I1209 23:59:30.823050   84259 system_pods.go:74] duration metric: took 10.914419ms to wait for pod list to return data ...
	I1209 23:59:30.823063   84259 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:59:30.828012   84259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:59:30.828062   84259 node_conditions.go:123] node cpu capacity is 2
	I1209 23:59:30.828076   84259 node_conditions.go:105] duration metric: took 5.004481ms to run NodePressure ...
	I1209 23:59:30.828097   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:31.092839   84259 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 23:59:31.097588   84259 kubeadm.go:739] kubelet initialised
	I1209 23:59:31.097613   84259 kubeadm.go:740] duration metric: took 4.744115ms waiting for restarted kubelet to initialise ...
	I1209 23:59:31.097623   84259 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:31.104318   84259 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:30.991511   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:33.490390   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:34.824186   83859 start.go:364] duration metric: took 55.338760885s to acquireMachinesLock for "no-preload-048296"
	I1209 23:59:34.824245   83859 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:59:34.824272   83859 fix.go:54] fixHost starting: 
	I1209 23:59:34.824660   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:34.824713   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:34.842421   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I1209 23:59:34.842851   83859 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:34.843297   83859 main.go:141] libmachine: Using API Version  1
	I1209 23:59:34.843319   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:34.843701   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:34.843916   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:34.844066   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1209 23:59:34.845768   83859 fix.go:112] recreateIfNeeded on no-preload-048296: state=Stopped err=<nil>
	I1209 23:59:34.845792   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	W1209 23:59:34.845958   83859 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:59:34.847617   83859 out.go:177] * Restarting existing kvm2 VM for "no-preload-048296" ...
	I1209 23:59:33.378848   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379297   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has current primary IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379326   84547 main.go:141] libmachine: (old-k8s-version-720064) Found IP for machine: 192.168.39.188
	I1209 23:59:33.379351   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserving static IP address...
	I1209 23:59:33.379701   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.379722   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserved static IP address: 192.168.39.188
	I1209 23:59:33.379743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | skip adding static IP to network mk-old-k8s-version-720064 - found existing host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"}
	I1209 23:59:33.379759   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Getting to WaitForSSH function...
	I1209 23:59:33.379776   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting for SSH to be available...
	I1209 23:59:33.381697   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.381990   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.382027   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.382137   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH client type: external
	I1209 23:59:33.382163   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa (-rw-------)
	I1209 23:59:33.382193   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:33.382212   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | About to run SSH command:
	I1209 23:59:33.382229   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | exit 0
	I1209 23:59:33.507653   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | SSH cmd err, output: <nil>: 
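To detect when SSH becomes available, the driver shells out to the system ssh client with host-key checking disabled and the machine's generated key, running the probe command exit 0. A rough Go equivalent using os/exec with a subset of the options listed above; the key path and address are copied from the log, and minikube may also use a native SSH client instead of the external binary:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-o", "IdentitiesOnly=yes",
    		"-i", "/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa",
    		"-p", "22",
    		"docker@192.168.39.188",
    		"exit 0", // the probe command from the log: succeeds once SSH is reachable
    	}
    	if err := exec.Command("ssh", args...).Run(); err != nil {
    		fmt.Println("SSH not ready yet:", err)
    		return
    	}
    	fmt.Println("SSH is available")
    }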
	I1209 23:59:33.507957   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetConfigRaw
	I1209 23:59:33.508555   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.511020   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511399   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.511431   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511695   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:59:33.511932   84547 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:33.511958   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:33.512144   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.514383   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514722   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.514743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514876   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.515048   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515187   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515349   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.515596   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.515835   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.515850   84547 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:33.623370   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:33.623407   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623670   84547 buildroot.go:166] provisioning hostname "old-k8s-version-720064"
	I1209 23:59:33.623708   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623909   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.626370   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626749   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.626781   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626943   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.627172   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627348   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627490   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.627666   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.627868   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.627884   84547 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-720064 && echo "old-k8s-version-720064" | sudo tee /etc/hostname
	I1209 23:59:33.749338   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-720064
	
	I1209 23:59:33.749370   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.752401   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.752774   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.752807   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.753000   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.753180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753372   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753531   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.753674   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.753837   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.753853   84547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-720064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-720064/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-720064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:33.872512   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:59:33.872548   84547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:33.872575   84547 buildroot.go:174] setting up certificates
	I1209 23:59:33.872585   84547 provision.go:84] configureAuth start
	I1209 23:59:33.872597   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.872906   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.875719   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876012   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.876051   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876210   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.878539   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.878857   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.878894   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.879025   84547 provision.go:143] copyHostCerts
	I1209 23:59:33.879087   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:33.879100   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:33.879166   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:33.879254   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:33.879262   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:33.879283   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:33.879338   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:33.879346   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:33.879365   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:33.879414   84547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-720064 san=[127.0.0.1 192.168.39.188 localhost minikube old-k8s-version-720064]
	I1209 23:59:34.211417   84547 provision.go:177] copyRemoteCerts
	I1209 23:59:34.211475   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:34.211504   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.214285   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.214686   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214789   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.215006   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.215180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.215345   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.301399   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:59:34.326216   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:34.349136   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 23:59:34.371597   84547 provision.go:87] duration metric: took 498.999263ms to configureAuth
	I1209 23:59:34.371633   84547 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:34.371832   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:59:34.371902   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.374649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.374985   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.375031   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.375161   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.375368   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375541   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.375897   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.376128   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.376152   84547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:34.588357   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:34.588391   84547 machine.go:96] duration metric: took 1.076442399s to provisionDockerMachine
	I1209 23:59:34.588402   84547 start.go:293] postStartSetup for "old-k8s-version-720064" (driver="kvm2")
	I1209 23:59:34.588412   84547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:34.588443   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.588749   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:34.588772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.591413   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.591805   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.591836   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.592001   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.592176   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.592325   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.592431   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.673445   84547 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:34.677394   84547 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:34.677421   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:34.677498   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:34.677600   84547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:34.677716   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:34.686261   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:34.709412   84547 start.go:296] duration metric: took 120.993275ms for postStartSetup
	I1209 23:59:34.709467   84547 fix.go:56] duration metric: took 19.749026723s for fixHost
	I1209 23:59:34.709495   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.712458   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712762   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.712796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712930   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.713158   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713335   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713506   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.713690   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.713852   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.713862   84547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:34.823993   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788774.778513264
	
	I1209 23:59:34.824022   84547 fix.go:216] guest clock: 1733788774.778513264
	I1209 23:59:34.824034   84547 fix.go:229] Guest: 2024-12-09 23:59:34.778513264 +0000 UTC Remote: 2024-12-09 23:59:34.709473288 +0000 UTC m=+249.236536627 (delta=69.039976ms)
	I1209 23:59:34.824067   84547 fix.go:200] guest clock delta is within tolerance: 69.039976ms
	I1209 23:59:34.824075   84547 start.go:83] releasing machines lock for "old-k8s-version-720064", held for 19.863672525s
	I1209 23:59:34.824110   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.824391   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:34.827165   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827596   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.827629   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827894   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828467   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828690   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828802   84547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:34.828865   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.828888   84547 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:34.828917   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.831978   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832052   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832514   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832548   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832572   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832748   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832751   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832899   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832982   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.832986   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.833198   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833217   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833370   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.833374   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.949044   84547 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:34.954999   84547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:35.100096   84547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:35.105764   84547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:35.105852   84547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:35.126893   84547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:35.126915   84547 start.go:495] detecting cgroup driver to use...
	I1209 23:59:35.126968   84547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:35.143236   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:35.157019   84547 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:35.157098   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:35.170608   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:35.184816   84547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:35.310043   84547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:35.486472   84547 docker.go:233] disabling docker service ...
	I1209 23:59:35.486653   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:35.501938   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:35.516997   84547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:35.667304   84547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:35.821316   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:35.836423   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:35.854633   84547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 23:59:35.854719   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.865592   84547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:35.865677   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.877247   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.889007   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.900196   84547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:35.912403   84547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:35.922919   84547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:35.922987   84547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:35.939439   84547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:35.949715   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:36.100115   84547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:36.207129   84547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:36.207206   84547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:36.213672   84547 start.go:563] Will wait 60s for crictl version
	I1209 23:59:36.213758   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:36.218204   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:36.255262   84547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:36.255367   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.286277   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.317334   84547 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
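	The stanza above writes /etc/crictl.yaml, rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.2, cgroup_manager "cgroupfs", conmon_cgroup "pod") and restarts CRI-O. A hedged sketch of how those edits could be verified on the guest; the commands are assumptions for illustration, not taken from the run:
	# settings the sed edits above should have left in the drop-in config
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	cat /etc/crictl.yaml    # expect: runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl version     # expect RuntimeName cri-o, RuntimeVersion 1.29.1 as logged above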
	I1209 23:59:34.848733   83859 main.go:141] libmachine: (no-preload-048296) Calling .Start
	I1209 23:59:34.848943   83859 main.go:141] libmachine: (no-preload-048296) Ensuring networks are active...
	I1209 23:59:34.849641   83859 main.go:141] libmachine: (no-preload-048296) Ensuring network default is active
	I1209 23:59:34.850039   83859 main.go:141] libmachine: (no-preload-048296) Ensuring network mk-no-preload-048296 is active
	I1209 23:59:34.850540   83859 main.go:141] libmachine: (no-preload-048296) Getting domain xml...
	I1209 23:59:34.851224   83859 main.go:141] libmachine: (no-preload-048296) Creating domain...
	I1209 23:59:36.270761   83859 main.go:141] libmachine: (no-preload-048296) Waiting to get IP...
	I1209 23:59:36.271970   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.272578   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.272645   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.272536   85657 retry.go:31] will retry after 253.181092ms: waiting for machine to come up
	I1209 23:59:36.527128   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.527735   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.527762   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.527683   85657 retry.go:31] will retry after 297.608817ms: waiting for machine to come up
	I1209 23:59:36.827372   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.828819   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.828849   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.828763   85657 retry.go:31] will retry after 374.112777ms: waiting for machine to come up
	I1209 23:59:33.112105   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:35.611353   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:36.115472   84259 pod_ready.go:93] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:36.115504   84259 pod_ready.go:82] duration metric: took 5.011153637s for pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:36.115521   84259 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:35.492415   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:37.992287   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:36.318651   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:36.321927   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322336   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:36.322366   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322619   84547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:36.327938   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:36.344152   84547 kubeadm.go:883] updating cluster {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:36.344279   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:59:36.344328   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:36.391261   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:36.391348   84547 ssh_runner.go:195] Run: which lz4
	I1209 23:59:36.395391   84547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:59:36.399585   84547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:59:36.399622   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 23:59:37.969251   84547 crio.go:462] duration metric: took 1.573891158s to copy over tarball
	I1209 23:59:37.969362   84547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:59:37.204687   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:37.205458   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:37.205479   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:37.205372   85657 retry.go:31] will retry after 420.490569ms: waiting for machine to come up
	I1209 23:59:37.627167   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:37.629813   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:37.629839   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:37.629687   85657 retry.go:31] will retry after 561.795207ms: waiting for machine to come up
	I1209 23:59:38.193652   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:38.194178   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:38.194200   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:38.194119   85657 retry.go:31] will retry after 925.018936ms: waiting for machine to come up
	I1209 23:59:39.121476   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:39.122014   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:39.122046   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:39.121958   85657 retry.go:31] will retry after 984.547761ms: waiting for machine to come up
	I1209 23:59:40.108478   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:40.108947   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:40.108981   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:40.108925   85657 retry.go:31] will retry after 1.261498912s: waiting for machine to come up
	I1209 23:59:41.372403   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:41.372901   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:41.372922   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:41.372884   85657 retry.go:31] will retry after 1.498283156s: waiting for machine to come up
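	While the old-k8s-version node is being provisioned, process 83859 keeps polling libvirt for the no-preload-048296 VM's DHCP lease. A hedged way to watch the same lease table by hand, assuming the libvirt network name shown in the retries above (sketch only):
	# list DHCP leases on the dedicated libvirt network; the MAC 52:54:00:c6:cf:c7 from the log
	# should eventually appear with an assigned IP
	virsh net-dhcp-leases mk-no-preload-048296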
	I1209 23:59:38.122964   84259 pod_ready.go:93] pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:38.122990   84259 pod_ready.go:82] duration metric: took 2.007459606s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:38.123004   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.133257   84259 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:40.631911   84259 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:40.631940   84259 pod_ready.go:82] duration metric: took 2.508926956s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.631957   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
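	The pod_ready polling above waits on the Ready condition of each control-plane pod. A rough kubectl equivalent of that check, assuming the kubectl context matches the profile name default-k8s-diff-port-871210 (an assumption; the log does not show the context name):
	# hedged sketch: query the Ready condition the poller above is waiting on
	kubectl --context default-k8s-diff-port-871210 -n kube-system \
	  get pod kube-controller-manager-default-k8s-diff-port-871210 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'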
	I1209 23:59:40.492044   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:42.991179   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:41.050494   84547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081073233s)
	I1209 23:59:41.050524   84547 crio.go:469] duration metric: took 3.081237568s to extract the tarball
	I1209 23:59:41.050533   84547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:59:41.092694   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:41.125264   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:41.125289   84547 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.125378   84547 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.125388   84547 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.125436   84547 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 23:59:41.125466   84547 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.125497   84547 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.125515   84547 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127285   84547 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.127325   84547 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 23:59:41.127376   84547 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.127405   84547 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127421   84547 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.127300   84547 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.302277   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.314265   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 23:59:41.323683   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.326327   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.345042   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.345550   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.362324   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.367612   84547 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 23:59:41.367663   84547 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.367706   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.406644   84547 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 23:59:41.406691   84547 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 23:59:41.406783   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.450490   84547 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 23:59:41.450550   84547 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.450601   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.477332   84547 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 23:59:41.477389   84547 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.477438   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499714   84547 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 23:59:41.499757   84547 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.499783   84547 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 23:59:41.499806   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499818   84547 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.499861   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505128   84547 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 23:59:41.505179   84547 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.505205   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.505249   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.505212   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505306   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.505342   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.507860   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.507923   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.643108   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.644251   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.644273   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.644358   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.644400   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.650732   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.650817   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.777981   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.789388   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.789435   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.789491   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.789561   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.789592   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.802784   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.912370   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.914494   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 23:59:41.944460   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 23:59:41.944564   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 23:59:41.944569   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.944660   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 23:59:41.944662   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 23:59:41.944726   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 23:59:42.104003   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 23:59:42.104074   84547 cache_images.go:92] duration metric: took 978.7734ms to LoadCachedImages
	W1209 23:59:42.104176   84547 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1209 23:59:42.104198   84547 kubeadm.go:934] updating node { 192.168.39.188 8443 v1.20.0 crio true true} ...
	I1209 23:59:42.104326   84547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-720064 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:42.104431   84547 ssh_runner.go:195] Run: crio config
	I1209 23:59:42.154016   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:59:42.154041   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:42.154050   84547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:42.154066   84547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.188 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-720064 NodeName:old-k8s-version-720064 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 23:59:42.154226   84547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-720064"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.188
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.188"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:42.154292   84547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 23:59:42.164866   84547 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:42.164943   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:42.175137   84547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 23:59:42.192176   84547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:42.210695   84547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 23:59:42.230364   84547 ssh_runner.go:195] Run: grep 192.168.39.188	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:42.234239   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:42.246747   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:42.379643   84547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:42.396626   84547 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064 for IP: 192.168.39.188
	I1209 23:59:42.396713   84547 certs.go:194] generating shared ca certs ...
	I1209 23:59:42.396746   84547 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.396942   84547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:42.397007   84547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:42.397023   84547 certs.go:256] generating profile certs ...
	I1209 23:59:42.397191   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/client.key
	I1209 23:59:42.397270   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key.abcbe3fa
	I1209 23:59:42.397330   84547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key
	I1209 23:59:42.397516   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:42.397564   84547 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:42.397580   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:42.397623   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:42.397662   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:42.397701   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:42.397768   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:42.398726   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:42.450053   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:42.475685   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:42.514396   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:42.562383   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 23:59:42.589690   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:59:42.614149   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:42.647112   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:42.671117   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:42.694303   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:42.722563   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:42.753478   84547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:42.773673   84547 ssh_runner.go:195] Run: openssl version
	I1209 23:59:42.779930   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:42.791531   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796422   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796536   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.802386   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:42.817058   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:42.828570   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833292   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833357   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.839679   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:42.850729   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:42.861699   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866417   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866479   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.873602   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
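A note on the symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0): these are OpenSSL subject-hash links. The hash printed by `openssl x509 -hash -noout` names a symlink `<hash>.0` in /etc/ssl/certs, which is how OpenSSL's default verification path locates a CA. A minimal sketch of the same step for one of the certs above:

	# sketch: register a CA under /etc/ssl/certs via OpenSSL's hashed-lookup scheme
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL verify resolves <hash>.0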
	I1209 23:59:42.884398   84547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:42.889137   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:42.896108   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:42.902584   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:42.908706   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:42.914313   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:42.919943   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
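The `-checkend 86400` runs above ask OpenSSL whether each certificate remains valid for at least the next 86400 seconds (24 hours); a non-zero exit status means the cert expires inside that window and would need regeneration. A standalone sketch for one of the certs checked above:

	# sketch: succeed only if the cert is still valid 24h from now
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
		echo "apiserver-kubelet-client.crt valid for >= 24h"
	else
		echo "apiserver-kubelet-client.crt expires within 24h"
	fi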
	I1209 23:59:42.925417   84547 kubeadm.go:392] StartCluster: {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:42.925546   84547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:42.925602   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:42.962948   84547 cri.go:89] found id: ""
	I1209 23:59:42.963030   84547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:42.973746   84547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:42.973768   84547 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:42.973819   84547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:42.983593   84547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:42.984528   84547 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-720064" does not appear in /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:59:42.985106   84547 kubeconfig.go:62] /home/jenkins/minikube-integration/19888-18950/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-720064" cluster setting kubeconfig missing "old-k8s-version-720064" context setting]
	I1209 23:59:42.985981   84547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.996842   84547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:43.007858   84547 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.188
	I1209 23:59:43.007901   84547 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:43.007916   84547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:43.007982   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:43.046416   84547 cri.go:89] found id: ""
	I1209 23:59:43.046495   84547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:43.063226   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:43.073919   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:43.073946   84547 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:43.073991   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:59:43.084217   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:43.084292   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:43.094196   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:59:43.104098   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:43.104178   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:43.115056   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.124696   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:43.124761   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.135180   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:59:43.146325   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:43.146392   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:43.156017   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:43.166584   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:43.376941   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.198180   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.431002   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.549010   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
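The restart path above rebuilds the control plane piecewise with `kubeadm init phase` rather than a full `kubeadm init`, and the order matters: certificates, then kubeconfigs, then kubelet start, then the static-pod control plane and local etcd. Condensed, the same sequence looks like this (binary path and config path taken from the log lines above):

	# sketch: phased control-plane restart as executed above
	CFG=/var/tmp/minikube/kubeadm.yaml
	KPATH=/var/lib/minikube/binaries/v1.20.0
	sudo env PATH="$KPATH:$PATH" kubeadm init phase certs all         --config "$CFG"
	sudo env PATH="$KPATH:$PATH" kubeadm init phase kubeconfig all    --config "$CFG"
	sudo env PATH="$KPATH:$PATH" kubeadm init phase kubelet-start     --config "$CFG"
	sudo env PATH="$KPATH:$PATH" kubeadm init phase control-plane all --config "$CFG"
	sudo env PATH="$KPATH:$PATH" kubeadm init phase etcd local        --config "$CFG"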
	I1209 23:59:44.644736   84547 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:44.644827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:45.145713   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:42.872724   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:42.873229   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:42.873260   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:42.873178   85657 retry.go:31] will retry after 1.469240763s: waiting for machine to come up
	I1209 23:59:44.344825   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:44.345339   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:44.345374   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:44.345317   85657 retry.go:31] will retry after 1.85673524s: waiting for machine to come up
	I1209 23:59:46.203693   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:46.204128   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:46.204152   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:46.204065   85657 retry.go:31] will retry after 3.5180616s: waiting for machine to come up
	I1209 23:59:43.003893   84259 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:43.734778   84259 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:43.734806   84259 pod_ready.go:82] duration metric: took 3.102839103s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.734821   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d7sxm" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.743393   84259 pod_ready.go:93] pod "kube-proxy-d7sxm" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:43.743421   84259 pod_ready.go:82] duration metric: took 8.592191ms for pod "kube-proxy-d7sxm" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.743435   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:44.250028   84259 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:44.250051   84259 pod_ready.go:82] duration metric: took 506.607784ms for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:44.250063   84259 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:46.256096   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:45.494189   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:47.993074   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:45.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.145300   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.644880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.144955   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.645081   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.145132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.645278   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.144932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.645874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:50.145061   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.725528   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:49.726040   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:49.726069   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:49.725982   85657 retry.go:31] will retry after 3.98915487s: waiting for machine to come up
	I1209 23:59:48.256397   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.757098   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.491110   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:52.989722   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.645171   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.144908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.645198   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.145700   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.145782   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.645678   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.145521   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:55.145578   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
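The repeated pgrep runs above are minikube polling for a kube-apiserver process until it appears or the wait budget runs out. A rough bash equivalent of that wait (the 500ms interval and 60s budget are illustrative assumptions, not values read from the log):

	# sketch: poll for the kube-apiserver process, ~500ms apart, up to ~60s
	for i in $(seq 1 120); do
		if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
			echo "kube-apiserver is running"
			break
		fi
		sleep 0.5
	done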
	I1209 23:59:53.718634   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.719141   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has current primary IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.719163   83859 main.go:141] libmachine: (no-preload-048296) Found IP for machine: 192.168.61.182
	I1209 23:59:53.719173   83859 main.go:141] libmachine: (no-preload-048296) Reserving static IP address...
	I1209 23:59:53.719594   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "no-preload-048296", mac: "52:54:00:c6:cf:c7", ip: "192.168.61.182"} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.719621   83859 main.go:141] libmachine: (no-preload-048296) DBG | skip adding static IP to network mk-no-preload-048296 - found existing host DHCP lease matching {name: "no-preload-048296", mac: "52:54:00:c6:cf:c7", ip: "192.168.61.182"}
	I1209 23:59:53.719632   83859 main.go:141] libmachine: (no-preload-048296) Reserved static IP address: 192.168.61.182
	I1209 23:59:53.719641   83859 main.go:141] libmachine: (no-preload-048296) Waiting for SSH to be available...
	I1209 23:59:53.719671   83859 main.go:141] libmachine: (no-preload-048296) DBG | Getting to WaitForSSH function...
	I1209 23:59:53.722015   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.722309   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.722342   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.722445   83859 main.go:141] libmachine: (no-preload-048296) DBG | Using SSH client type: external
	I1209 23:59:53.722471   83859 main.go:141] libmachine: (no-preload-048296) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa (-rw-------)
	I1209 23:59:53.722504   83859 main.go:141] libmachine: (no-preload-048296) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:53.722517   83859 main.go:141] libmachine: (no-preload-048296) DBG | About to run SSH command:
	I1209 23:59:53.722529   83859 main.go:141] libmachine: (no-preload-048296) DBG | exit 0
	I1209 23:59:53.843261   83859 main.go:141] libmachine: (no-preload-048296) DBG | SSH cmd err, output: <nil>: 
	I1209 23:59:53.843673   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetConfigRaw
	I1209 23:59:53.844367   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:53.846889   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.847170   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.847203   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.847433   83859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/config.json ...
	I1209 23:59:53.847656   83859 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:53.847675   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:53.847834   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:53.849867   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.850179   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.850213   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.850372   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:53.850548   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.850702   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.850878   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:53.851058   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:53.851288   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:53.851303   83859 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:53.947889   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:53.947916   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:53.948220   83859 buildroot.go:166] provisioning hostname "no-preload-048296"
	I1209 23:59:53.948259   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:53.948496   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:53.951070   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.951479   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.951509   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.951729   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:53.951919   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.952120   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.952280   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:53.952456   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:53.952650   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:53.952672   83859 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-048296 && echo "no-preload-048296" | sudo tee /etc/hostname
	I1209 23:59:54.065706   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-048296
	
	I1209 23:59:54.065738   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.068629   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.068942   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.068975   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.069241   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.069493   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.069731   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.069939   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.070156   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.070325   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.070346   83859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-048296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-048296/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-048296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:54.180424   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:59:54.180460   83859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:54.180479   83859 buildroot.go:174] setting up certificates
	I1209 23:59:54.180490   83859 provision.go:84] configureAuth start
	I1209 23:59:54.180499   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:54.180802   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:54.183349   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.183703   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.183732   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.183852   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.186076   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.186341   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.186372   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.186498   83859 provision.go:143] copyHostCerts
	I1209 23:59:54.186554   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:54.186564   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:54.186627   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:54.186715   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:54.186723   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:54.186744   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:54.186805   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:54.186814   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:54.186831   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:54.186881   83859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.no-preload-048296 san=[127.0.0.1 192.168.61.182 localhost minikube no-preload-048296]
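The server certificate above is issued in-process against the machine CA (ca.pem/ca-key.pem) with the listed IP and DNS SANs. Minikube does this with Go's crypto libraries rather than the openssl CLI, but an equivalent, purely illustrative sketch with openssl would be:

	# sketch (illustrative only, not minikube's actual code path): issue a server cert with the same SANs
	SAN="IP:127.0.0.1,IP:192.168.61.182,DNS:localhost,DNS:minikube,DNS:no-preload-048296"
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
		-subj "/O=jenkins.no-preload-048296"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
		-days 825 -extfile <(printf "subjectAltName=%s" "$SAN") -out server.pem
	# the 825-day lifetime is an arbitrary illustrative choice, not minikube's setting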
	I1209 23:59:54.291230   83859 provision.go:177] copyRemoteCerts
	I1209 23:59:54.291296   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:54.291319   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.293968   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.294257   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.294283   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.294451   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.294654   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.294814   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.294963   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.377572   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:54.401222   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 23:59:54.425323   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:59:54.448358   83859 provision.go:87] duration metric: took 267.838318ms to configureAuth
	I1209 23:59:54.448387   83859 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:54.448563   83859 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:54.448629   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.451082   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.451422   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.451450   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.451678   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.451906   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.452088   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.452222   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.452374   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.452550   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.452567   83859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:54.665629   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:54.665663   83859 machine.go:96] duration metric: took 817.991465ms to provisionDockerMachine
	I1209 23:59:54.665677   83859 start.go:293] postStartSetup for "no-preload-048296" (driver="kvm2")
	I1209 23:59:54.665690   83859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:54.665712   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.666016   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:54.666043   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.668993   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.669501   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.669532   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.669666   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.669859   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.670004   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.670160   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.752122   83859 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:54.756546   83859 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:54.756569   83859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:54.756640   83859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:54.756706   83859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:54.756797   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:54.766465   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:54.792214   83859 start.go:296] duration metric: took 126.521539ms for postStartSetup
	I1209 23:59:54.792263   83859 fix.go:56] duration metric: took 19.967992145s for fixHost
	I1209 23:59:54.792284   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.794921   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.795304   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.795334   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.795549   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.795807   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.795986   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.796154   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.796333   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.796485   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.796495   83859 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:54.900382   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788794.870799171
	
	I1209 23:59:54.900413   83859 fix.go:216] guest clock: 1733788794.870799171
	I1209 23:59:54.900423   83859 fix.go:229] Guest: 2024-12-09 23:59:54.870799171 +0000 UTC Remote: 2024-12-09 23:59:54.792267927 +0000 UTC m=+357.892420200 (delta=78.531244ms)
	I1209 23:59:54.900443   83859 fix.go:200] guest clock delta is within tolerance: 78.531244ms
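The `date +%s.%N` round trip above is a guest/host clock comparison: the guest's epoch timestamp is subtracted from the host's wall clock at the moment the command returns, and the ~78ms delta is accepted as within tolerance. A trivial host-side sketch of the same measurement (the plain `ssh docker@...` invocation is an assumption for illustration; the log uses minikube's own SSH key and client):

	# sketch: measure guest/host clock skew over SSH
	guest=$(ssh docker@192.168.61.182 'date +%s.%N')   # assumes the machine's SSH key is available
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest/host clock delta: %.6fs\n", h - g }'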
	I1209 23:59:54.900448   83859 start.go:83] releasing machines lock for "no-preload-048296", held for 20.076230285s
	I1209 23:59:54.900466   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.900763   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:54.903437   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.903762   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.903785   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.903941   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904412   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904583   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904674   83859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:54.904735   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.904788   83859 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:54.904815   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.907540   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.907693   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.907936   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.907960   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.908083   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.908098   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.908108   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.908269   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.908332   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.908431   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.908578   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.908607   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.908750   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.908753   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.980588   83859 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:55.003079   83859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:55.152159   83859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:55.158212   83859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:55.158284   83859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:55.177510   83859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:55.177539   83859 start.go:495] detecting cgroup driver to use...
	I1209 23:59:55.177616   83859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:55.194262   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:55.210699   83859 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:55.210770   83859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:55.226308   83859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:55.242175   83859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:55.361845   83859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:55.500415   83859 docker.go:233] disabling docker service ...
	I1209 23:59:55.500487   83859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:55.515689   83859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:55.528651   83859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:55.663341   83859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:55.776773   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:55.790155   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:55.807749   83859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:59:55.807807   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.817580   83859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:55.817644   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.827975   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.837871   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.848118   83859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:55.858322   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.867754   83859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.884626   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
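Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf in roughly the following state (a reconstruction of the intended end result, not a file captured from the VM; the section headers are the ones CRI-O's stock drop-in uses):

	# /etc/crio/crio.conf.d/02-crio.conf (approximate end state after the edits above)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]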
	I1209 23:59:55.894254   83859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:55.903128   83859 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:55.903187   83859 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:55.914887   83859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:55.924665   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:56.028206   83859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:56.117484   83859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:56.117573   83859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:56.122345   83859 start.go:563] Will wait 60s for crictl version
	I1209 23:59:56.122401   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.126032   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:56.161884   83859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:56.161978   83859 ssh_runner.go:195] Run: crio --version
	I1209 23:59:56.194758   83859 ssh_runner.go:195] Run: crio --version
	I1209 23:59:56.224190   83859 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:59:56.225560   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:56.228550   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:56.228928   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:56.228950   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:56.229163   83859 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:56.233615   83859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:56.245890   83859 kubeadm.go:883] updating cluster {Name:no-preload-048296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:56.246079   83859 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:59:56.246132   83859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:56.285601   83859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:59:56.285629   83859 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:59:56.285694   83859 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:56.285734   83859 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.285761   83859 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.285813   83859 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.285858   83859 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.285900   83859 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.285810   83859 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 23:59:56.285983   83859 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.287414   83859 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 23:59:56.287436   83859 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.287436   83859 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.287446   83859 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.287465   83859 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.287500   83859 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.287550   83859 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:56.287550   83859 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.437713   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.453590   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.457708   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.468937   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1209 23:59:56.474766   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.477321   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.483791   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.530686   83859 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1209 23:59:56.530735   83859 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.530786   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.603275   83859 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1209 23:59:56.603327   83859 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.603376   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.612591   83859 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1209 23:59:56.612635   83859 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.612686   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726597   83859 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1209 23:59:56.726643   83859 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.726650   83859 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1209 23:59:56.726682   83859 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.726692   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726728   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726752   83859 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1209 23:59:56.726777   83859 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.726808   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726824   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.726882   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.726889   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.795578   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.795624   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.795646   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.795581   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.795723   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.795847   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.905584   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.909282   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.927659   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.927832   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.927928   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.932487   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:53.256494   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:55.257888   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:54.991894   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:57.491500   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:55.645853   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.145844   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.644899   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.145431   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.645625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.145096   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.645933   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.145181   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.645624   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:00.145874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.971745   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1209 23:59:56.971844   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:56.987218   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 23:59:56.987380   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 23:59:57.050691   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:57.064778   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:57.087868   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:57.087972   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:57.089176   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 23:59:57.089235   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1209 23:59:57.089262   83859 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:57.089269   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 23:59:57.089311   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:57.089355   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1209 23:59:57.098939   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 23:59:57.099044   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 23:59:57.148482   83859 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1209 23:59:57.148532   83859 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:57.148588   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:57.173723   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 23:59:57.173810   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 23:59:57.186640   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 23:59:57.186734   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:00.723670   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.634331388s)
	I1210 00:00:00.723704   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1210 00:00:00.723717   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (3.634425647s)
	I1210 00:00:00.723751   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1210 00:00:00.723728   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 00:00:00.723767   83859 ssh_runner.go:235] Completed: which crictl: (3.575163822s)
	I1210 00:00:00.723815   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:00.723857   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (3.550033094s)
	I1210 00:00:00.723816   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 00:00:00.723909   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (3.537159613s)
	I1210 00:00:00.723934   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1210 00:00:00.723854   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.624679414s)
	I1210 00:00:00.723974   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1210 00:00:00.723880   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1209 23:59:57.756271   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:00.256451   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:02.256956   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:59.491859   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:01.992860   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:00.644886   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.145063   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.645932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.145743   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.645396   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.145007   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.645927   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.145876   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:05.145906   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.713826   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.989917623s)
	I1210 00:00:02.713866   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1210 00:00:02.713883   83859 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.99004494s)
	I1210 00:00:02.713896   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 00:00:02.713949   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:02.713952   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 00:00:02.753832   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:04.780306   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.066273178s)
	I1210 00:00:04.780344   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1210 00:00:04.780368   83859 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:04.780432   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:04.780430   83859 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.026560705s)
	I1210 00:00:04.780544   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 00:00:04.780618   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:06.756731   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.976269529s)
	I1210 00:00:06.756763   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1210 00:00:06.756774   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.976141757s)
	I1210 00:00:06.756793   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 00:00:06.756791   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 00:00:06.756846   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 00:00:04.258390   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:06.756422   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:04.490429   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:06.991054   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:08.991247   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:05.644919   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.145142   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.645240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.144864   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.145246   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.645965   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.645731   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:10.145638   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.712114   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.955245193s)
	I1210 00:00:08.712142   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1210 00:00:08.712166   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 00:00:08.712212   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 00:00:10.892294   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.180058873s)
	I1210 00:00:10.892319   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1210 00:00:10.892352   83859 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:10.892391   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:11.542174   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 00:00:11.542214   83859 cache_images.go:123] Successfully loaded all cached images
	I1210 00:00:11.542219   83859 cache_images.go:92] duration metric: took 15.256564286s to LoadCachedImages
	I1210 00:00:11.542230   83859 kubeadm.go:934] updating node { 192.168.61.182 8443 v1.31.2 crio true true} ...
	I1210 00:00:11.542334   83859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-048296 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:00:11.542397   83859 ssh_runner.go:195] Run: crio config
	I1210 00:00:11.586768   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:00:11.586787   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:00:11.586795   83859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:00:11.586817   83859 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.182 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-048296 NodeName:no-preload-048296 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:00:11.586932   83859 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-048296"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.182"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.182"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:00:11.586992   83859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:00:11.597936   83859 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:00:11.597996   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:00:11.608068   83859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 00:00:11.624881   83859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:00:11.640934   83859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1210 00:00:11.658113   83859 ssh_runner.go:195] Run: grep 192.168.61.182	control-plane.minikube.internal$ /etc/hosts
	I1210 00:00:11.662360   83859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:00:11.674732   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:00:11.803743   83859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:00:11.821169   83859 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296 for IP: 192.168.61.182
	I1210 00:00:11.821191   83859 certs.go:194] generating shared ca certs ...
	I1210 00:00:11.821211   83859 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:00:11.821404   83859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1210 00:00:11.821485   83859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1210 00:00:11.821502   83859 certs.go:256] generating profile certs ...
	I1210 00:00:11.821583   83859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/client.key
	I1210 00:00:11.821644   83859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.key.9304569d
	I1210 00:00:11.821677   83859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.key
	I1210 00:00:11.821783   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1210 00:00:11.821811   83859 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1210 00:00:11.821821   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:00:11.821848   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1210 00:00:11.821881   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:00:11.821917   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1210 00:00:11.821965   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1210 00:00:11.822649   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:00:11.867065   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 00:00:11.898744   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:00:11.932711   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:00:08.758664   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:11.257156   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:11.491011   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:13.491036   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:10.645462   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.145646   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.645921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.145804   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.644924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.145055   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.645811   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.144877   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.645709   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.145827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.966670   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:00:11.997827   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:00:12.023344   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:00:12.048872   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:00:12.074332   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:00:12.097886   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1210 00:00:12.121883   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1210 00:00:12.145451   83859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:00:12.167089   83859 ssh_runner.go:195] Run: openssl version
	I1210 00:00:12.172747   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1210 00:00:12.183718   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.188458   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.188521   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.194537   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:00:12.205725   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:00:12.216441   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.221179   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.221237   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.226942   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:00:12.237830   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1210 00:00:12.248188   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.253689   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.253747   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.259326   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1210 00:00:12.269796   83859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:00:12.274316   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:00:12.280263   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:00:12.286235   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:00:12.292330   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:00:12.298347   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:00:12.304186   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 00:00:12.310189   83859 kubeadm.go:392] StartCluster: {Name:no-preload-048296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:00:12.310296   83859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:00:12.310349   83859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:00:12.346589   83859 cri.go:89] found id: ""
	I1210 00:00:12.346666   83859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:00:12.357674   83859 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 00:00:12.357701   83859 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 00:00:12.357753   83859 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 00:00:12.367817   83859 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:00:12.368827   83859 kubeconfig.go:125] found "no-preload-048296" server: "https://192.168.61.182:8443"
	I1210 00:00:12.371117   83859 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 00:00:12.380367   83859 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.182
	I1210 00:00:12.380393   83859 kubeadm.go:1160] stopping kube-system containers ...
	I1210 00:00:12.380404   83859 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 00:00:12.380446   83859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:00:12.413083   83859 cri.go:89] found id: ""
	I1210 00:00:12.413143   83859 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 00:00:12.429819   83859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:00:12.439244   83859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:00:12.439269   83859 kubeadm.go:157] found existing configuration files:
	
	I1210 00:00:12.439336   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:00:12.448182   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:00:12.448257   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:00:12.457759   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:00:12.467372   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:00:12.467449   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:00:12.476877   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:00:12.485831   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:00:12.485898   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:00:12.495537   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:00:12.504333   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:00:12.504398   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:00:12.514572   83859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:00:12.524134   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:12.619683   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.361844   83859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.742129567s)
	I1210 00:00:14.361876   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.585241   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.653166   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.771204   83859 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:00:14.771306   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.272143   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.771676   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.789350   83859 api_server.go:72] duration metric: took 1.018150373s to wait for apiserver process to appear ...
	I1210 00:00:15.789378   83859 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:00:15.789396   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:15.789878   83859 api_server.go:269] stopped: https://192.168.61.182:8443/healthz: Get "https://192.168.61.182:8443/healthz": dial tcp 192.168.61.182:8443: connect: connection refused
	I1210 00:00:16.289518   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:13.757326   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:15.757843   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:15.491876   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:17.991160   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:18.572189   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:00:18.572233   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:00:18.572262   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:18.607691   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:00:18.607732   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:00:18.790007   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:18.794252   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:00:18.794281   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:00:19.289819   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:19.294079   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:00:19.294104   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:00:19.789647   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:19.794119   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 200:
	ok
	I1210 00:00:19.800447   83859 api_server.go:141] control plane version: v1.31.2
	I1210 00:00:19.800477   83859 api_server.go:131] duration metric: took 4.011091942s to wait for apiserver health ...
	I1210 00:00:19.800488   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:00:19.800496   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:00:19.802341   83859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
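
The api_server.go lines above keep probing https://192.168.61.182:8443/healthz, logging the 500 responses (the rbac/bootstrap-roles and scheduling bootstrap hooks are still pending) until a 200 comes back and the control-plane version is read. The following is a minimal Go sketch of that polling pattern, not minikube's actual implementation; the 500 ms cadence and 4-minute ceiling are assumptions, and TLS verification is skipped only because the local apiserver certificate is self-signed.

// healthz_poll.go: illustrative sketch of waiting for an apiserver /healthz
// endpoint to return HTTP 200, mirroring the probe loop in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// Self-signed cert on a throwaway local cluster; never skip verification in production.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is up
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.182:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
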
	I1210 00:00:15.645243   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.145027   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.645018   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.145001   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.644893   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.145189   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.645579   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.144946   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.645832   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:20.145634   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.803715   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:00:19.814324   83859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
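
The two steps above create /etc/cni/net.d and copy a 496-byte bridge conflist into it. The log does not show the file's contents, so the sketch below writes a generic bridge + host-local conflist of the same shape; every field value in the JSON is an assumption for illustration, not minikube's actual template.

// write_cni.go: sketch of dropping a bridge CNI conflist into /etc/cni/net.d,
// mirroring the mkdir + scp step logged above (run as root).
package main

import (
	"log"
	"os"
	"path/filepath"
)

// Illustrative conflist; the real 1-k8s.conflist minikube installs may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
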
	I1210 00:00:19.861326   83859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:00:19.873554   83859 system_pods.go:59] 8 kube-system pods found
	I1210 00:00:19.873588   83859 system_pods.go:61] "coredns-7c65d6cfc9-smnt7" [7d85cb49-3cbb-4133-acab-284ddf0b72f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:00:19.873597   83859 system_pods.go:61] "etcd-no-preload-048296" [3eeaede5-b759-49e5-8267-0aaf3c374178] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 00:00:19.873606   83859 system_pods.go:61] "kube-apiserver-no-preload-048296" [8e9d39ce-218a-4fe4-86ea-234d97a5e51d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 00:00:19.873615   83859 system_pods.go:61] "kube-controller-manager-no-preload-048296" [e28ccf7d-13e4-4b55-81d0-2dd348584bca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 00:00:19.873622   83859 system_pods.go:61] "kube-proxy-z479r" [eb6588a6-ac3c-4066-b69e-5613b03ade53] Running
	I1210 00:00:19.873634   83859 system_pods.go:61] "kube-scheduler-no-preload-048296" [0a184373-35d4-4ac6-92fb-65a4faf1c2e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 00:00:19.873646   83859 system_pods.go:61] "metrics-server-6867b74b74-sd58c" [19d64157-7b9b-4a39-8e0c-2c48729fbc93] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:00:19.873653   83859 system_pods.go:61] "storage-provisioner" [30b3ed5d-f843-4090-827f-e1ee6a7ea274] Running
	I1210 00:00:19.873661   83859 system_pods.go:74] duration metric: took 12.308897ms to wait for pod list to return data ...
	I1210 00:00:19.873668   83859 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:00:19.877459   83859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:00:19.877482   83859 node_conditions.go:123] node cpu capacity is 2
	I1210 00:00:19.877496   83859 node_conditions.go:105] duration metric: took 3.822698ms to run NodePressure ...
	I1210 00:00:19.877513   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:20.145838   83859 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 00:00:20.151038   83859 kubeadm.go:739] kubelet initialised
	I1210 00:00:20.151059   83859 kubeadm.go:740] duration metric: took 5.1997ms waiting for restarted kubelet to initialise ...
	I1210 00:00:20.151068   83859 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:00:20.157554   83859 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.162413   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.162437   83859 pod_ready.go:82] duration metric: took 4.858113ms for pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.162446   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.162452   83859 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.167285   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "etcd-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.167312   83859 pod_ready.go:82] duration metric: took 4.84903ms for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.167321   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "etcd-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.167328   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.172365   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "kube-apiserver-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.172386   83859 pod_ready.go:82] duration metric: took 5.052446ms for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.172395   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "kube-apiserver-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.172401   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
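
Each pod_ready.go wait above repeatedly fetches a system pod and inspects its Ready condition, skipping (and logging) pods whose node still reports "Ready":"False". Below is a simplified client-go sketch of that check, not minikube's own code; the kubeconfig path, pod name, and 4m0s budget come from the log, while the 2-second polling interval is an assumption.

// pod_ready_sketch.go: minimal "wait for pod Ready" loop using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-smnt7", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
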
	I1210 00:00:18.257283   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.756752   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.490469   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:22.990635   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.645094   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.145179   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.645829   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.145159   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.645390   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.645205   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.145625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.645299   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:25.144971   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.180104   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:24.680524   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:26.680818   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:22.757621   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:25.257680   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:24.991719   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:27.490099   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:25.644977   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.145967   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.144972   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.645030   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.145227   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.645104   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.144957   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.645516   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:30.145343   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.678941   83859 pod_ready.go:93] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:28.678972   83859 pod_ready.go:82] duration metric: took 8.506560777s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.678985   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z479r" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.684896   83859 pod_ready.go:93] pod "kube-proxy-z479r" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:28.684917   83859 pod_ready.go:82] duration metric: took 5.924915ms for pod "kube-proxy-z479r" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.684926   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:30.691918   83859 pod_ready.go:103] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:31.691333   83859 pod_ready.go:93] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:31.691362   83859 pod_ready.go:82] duration metric: took 3.006429416s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:31.691372   83859 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:27.756937   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:30.260931   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:29.990322   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:32.489835   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:30.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.145393   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.645952   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.145695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.644910   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.146012   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.645636   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.145664   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.645813   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:35.145365   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.697948   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.197360   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:32.756044   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:34.756585   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.756683   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:34.490676   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.491117   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:38.991231   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:35.645823   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.145410   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.144994   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.645701   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.144880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.645735   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.145806   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.645695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:40.145692   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.198422   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:40.697971   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:39.256015   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:41.256607   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:41.490276   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:43.989182   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:40.644935   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.145320   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.645470   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.145714   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.644961   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.145687   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.644990   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:44.144958   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:44.645780   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:44.645870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:44.680200   84547 cri.go:89] found id: ""
	I1210 00:00:44.680321   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.680334   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:44.680343   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:44.680413   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:44.715278   84547 cri.go:89] found id: ""
	I1210 00:00:44.715305   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.715312   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:44.715318   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:44.715377   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:44.749908   84547 cri.go:89] found id: ""
	I1210 00:00:44.749933   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.749941   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:44.749946   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:44.750008   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:44.786851   84547 cri.go:89] found id: ""
	I1210 00:00:44.786882   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.786893   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:44.786901   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:44.786966   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:44.830060   84547 cri.go:89] found id: ""
	I1210 00:00:44.830106   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.830117   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:44.830125   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:44.830191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:44.865525   84547 cri.go:89] found id: ""
	I1210 00:00:44.865560   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.865571   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:44.865579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:44.865643   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:44.902529   84547 cri.go:89] found id: ""
	I1210 00:00:44.902565   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.902575   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:44.902584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:44.902647   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:44.938876   84547 cri.go:89] found id: ""
	I1210 00:00:44.938904   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.938914   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:44.938925   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:44.938939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:44.992533   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:44.992570   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:45.005990   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:45.006020   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:45.116774   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:45.116797   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:45.116810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:45.187376   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:45.187411   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
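
The cycle above probes for a kube-apiserver process, lists containers for each well-known component with crictl, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, and CRI-O logs. A local (non-SSH) sketch of the crictl probe step follows, treating empty --quiet output as "no containers"; the component list and sudo usage mirror the log, but running crictl directly rather than through minikube's ssh_runner is an assumption about the environment.

// crictl_probe.go: sketch of the "listing CRI containers" step in the log.
// Assumes crictl is on PATH and the invoking user may run it via sudo.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (any state) whose name matches.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; an empty result means no match.
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Printf("error listing %q: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
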
	I1210 00:00:42.698555   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.198082   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:43.756559   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.756755   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.990399   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.991485   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.728964   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:47.742500   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:47.742560   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:47.775751   84547 cri.go:89] found id: ""
	I1210 00:00:47.775783   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.775793   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:47.775799   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:47.775848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:47.810202   84547 cri.go:89] found id: ""
	I1210 00:00:47.810228   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.810235   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:47.810241   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:47.810302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:47.844686   84547 cri.go:89] found id: ""
	I1210 00:00:47.844730   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.844739   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:47.844745   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:47.844802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:47.884076   84547 cri.go:89] found id: ""
	I1210 00:00:47.884108   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.884119   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:47.884127   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:47.884188   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:47.923697   84547 cri.go:89] found id: ""
	I1210 00:00:47.923722   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.923729   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:47.923734   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:47.923791   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:47.955790   84547 cri.go:89] found id: ""
	I1210 00:00:47.955816   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.955824   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:47.955829   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:47.955890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:48.021501   84547 cri.go:89] found id: ""
	I1210 00:00:48.021529   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.021537   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:48.021543   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:48.021592   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:48.054645   84547 cri.go:89] found id: ""
	I1210 00:00:48.054675   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.054688   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:48.054699   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:48.054714   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:48.135706   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:48.135729   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:48.135746   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:48.212397   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:48.212438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:48.254002   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:48.254033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:48.305858   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:48.305891   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:47.697503   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:49.698076   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.758210   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.257400   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.490507   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:52.989527   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.819644   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:50.834153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:50.834233   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:50.872357   84547 cri.go:89] found id: ""
	I1210 00:00:50.872391   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.872402   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:50.872409   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:50.872471   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:50.909793   84547 cri.go:89] found id: ""
	I1210 00:00:50.909826   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.909839   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:50.909848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:50.909915   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:50.947167   84547 cri.go:89] found id: ""
	I1210 00:00:50.947206   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.947219   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:50.947228   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:50.947304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:50.987194   84547 cri.go:89] found id: ""
	I1210 00:00:50.987219   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.987228   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:50.987234   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:50.987287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:51.027433   84547 cri.go:89] found id: ""
	I1210 00:00:51.027464   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.027476   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:51.027483   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:51.027586   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:51.062156   84547 cri.go:89] found id: ""
	I1210 00:00:51.062186   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.062200   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:51.062208   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:51.062267   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:51.099161   84547 cri.go:89] found id: ""
	I1210 00:00:51.099187   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.099195   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:51.099202   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:51.099249   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:51.134400   84547 cri.go:89] found id: ""
	I1210 00:00:51.134432   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.134445   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:51.134459   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:51.134474   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:51.171842   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:51.171869   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:51.222815   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:51.222854   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:51.236499   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:51.236537   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:51.307835   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:51.307856   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:51.307871   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:53.888771   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:53.902167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:53.902234   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:53.940724   84547 cri.go:89] found id: ""
	I1210 00:00:53.940748   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.940756   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:53.940767   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:53.940823   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:53.975076   84547 cri.go:89] found id: ""
	I1210 00:00:53.975111   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.975122   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:53.975128   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:53.975191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:54.011123   84547 cri.go:89] found id: ""
	I1210 00:00:54.011149   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.011157   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:54.011162   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:54.011207   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:54.043594   84547 cri.go:89] found id: ""
	I1210 00:00:54.043620   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.043628   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:54.043633   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:54.043679   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:54.076172   84547 cri.go:89] found id: ""
	I1210 00:00:54.076208   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.076219   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:54.076227   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:54.076292   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:54.110646   84547 cri.go:89] found id: ""
	I1210 00:00:54.110679   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.110691   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:54.110711   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:54.110784   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:54.145901   84547 cri.go:89] found id: ""
	I1210 00:00:54.145929   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.145940   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:54.145947   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:54.146007   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:54.178589   84547 cri.go:89] found id: ""
	I1210 00:00:54.178618   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.178629   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:54.178639   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:54.178652   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:54.231005   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:54.231040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:54.244583   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:54.244608   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:54.318759   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:54.318787   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:54.318800   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:54.395975   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:54.396012   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:52.198012   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:54.199221   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:56.697929   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:52.756419   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:54.757807   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:57.257555   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:55.490621   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:57.491632   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:56.969699   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:56.985159   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:56.985232   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:57.020768   84547 cri.go:89] found id: ""
	I1210 00:00:57.020798   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.020807   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:57.020812   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:57.020861   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:57.055796   84547 cri.go:89] found id: ""
	I1210 00:00:57.055826   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.055834   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:57.055839   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:57.055897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:57.089345   84547 cri.go:89] found id: ""
	I1210 00:00:57.089375   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.089385   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:57.089392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:57.089460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:57.123969   84547 cri.go:89] found id: ""
	I1210 00:00:57.123993   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.124004   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:57.124012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:57.124075   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:57.158744   84547 cri.go:89] found id: ""
	I1210 00:00:57.158769   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.158777   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:57.158783   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:57.158842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:57.203933   84547 cri.go:89] found id: ""
	I1210 00:00:57.203954   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.203962   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:57.203968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:57.204025   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:57.238180   84547 cri.go:89] found id: ""
	I1210 00:00:57.238213   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.238224   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:57.238231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:57.238287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:57.273583   84547 cri.go:89] found id: ""
	I1210 00:00:57.273612   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.273623   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:57.273633   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:57.273648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:57.345992   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:57.346019   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:57.346035   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:57.428335   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:57.428369   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:57.466437   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:57.466472   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:57.517064   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:57.517099   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.030599   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:00.045151   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:00.045229   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:00.083050   84547 cri.go:89] found id: ""
	I1210 00:01:00.083074   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.083085   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:00.083093   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:00.083152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:00.119082   84547 cri.go:89] found id: ""
	I1210 00:01:00.119107   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.119120   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:00.119126   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:00.119185   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:00.166888   84547 cri.go:89] found id: ""
	I1210 00:01:00.166921   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.166931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:00.166939   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:00.166998   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:00.209504   84547 cri.go:89] found id: ""
	I1210 00:01:00.209526   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.209533   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:00.209539   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:00.209595   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:00.247649   84547 cri.go:89] found id: ""
	I1210 00:01:00.247672   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.247680   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:00.247686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:00.247736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:00.294419   84547 cri.go:89] found id: ""
	I1210 00:01:00.294445   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.294455   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:00.294463   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:00.294526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:00.328634   84547 cri.go:89] found id: ""
	I1210 00:01:00.328667   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.328677   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:00.328684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:00.328751   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:00.364667   84547 cri.go:89] found id: ""
	I1210 00:01:00.364696   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.364724   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:00.364733   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:00.364745   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.377518   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:00.377552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:00.449145   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:00.449166   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:00.449178   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:58.698000   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.196968   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:59.756367   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.756407   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:59.989896   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.991121   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:00.529462   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:00.529499   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:00.570471   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:00.570503   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.122835   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:03.136329   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:03.136405   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:03.170352   84547 cri.go:89] found id: ""
	I1210 00:01:03.170380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.170388   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:03.170393   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:03.170454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:03.204339   84547 cri.go:89] found id: ""
	I1210 00:01:03.204368   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.204379   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:03.204386   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:03.204448   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:03.237516   84547 cri.go:89] found id: ""
	I1210 00:01:03.237559   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.237572   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:03.237579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:03.237641   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:03.273379   84547 cri.go:89] found id: ""
	I1210 00:01:03.273407   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.273416   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:03.273421   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:03.274017   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:03.308987   84547 cri.go:89] found id: ""
	I1210 00:01:03.309026   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.309038   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:03.309046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:03.309102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:03.341913   84547 cri.go:89] found id: ""
	I1210 00:01:03.341943   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.341954   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:03.341961   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:03.342009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:03.375403   84547 cri.go:89] found id: ""
	I1210 00:01:03.375429   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.375437   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:03.375442   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:03.375494   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:03.408352   84547 cri.go:89] found id: ""
	I1210 00:01:03.408380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.408387   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:03.408397   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:03.408409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:03.483789   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:03.483831   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:03.532021   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:03.532050   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.585095   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:03.585129   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:03.600062   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:03.600089   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:03.671633   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
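	(Editor's note: the block above repeats for the rest of this log. The startup poller for the old-k8s-version node, process 84547, looks for a running kube-apiserver with `pgrep`, asks the CRI runtime for each control-plane container by name with `crictl ps -a --quiet --name=...`, finds none, and then falls back to gathering kubelet, dmesg, CRI-O, and `describe nodes` output; the `describe nodes` call fails because nothing is serving on localhost:8443. The sketch below is not minikube's implementation; it is a minimal, hypothetical Go approximation of the container check that those cri.go lines log, assuming `sudo` and `crictl` are available in a local shell rather than over the SSH runner the test uses.)

```go
// checkcri.go - illustrative sketch only; not minikube's code.
// Approximates the check logged by cri.go above: ask the CRI runtime whether
// a container with a given name exists, using the same crictl invocation
// that appears verbatim in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns
// the container IDs it prints, one per line (an empty slice when none exist).
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed for %q: %w", name, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	// The same component names the log iterates over.
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Println(err)
			continue
		}
		if len(ids) == 0 {
			// Matches the repeated `No container was found matching "<name>"` warnings.
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%s: %v\n", name, ids)
		}
	}
}
```

	When every name in that list comes back empty, as it does for the duration of this log, the node has no control plane at all, which is why every kubectl call against localhost:8443 below is refused.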
	I1210 00:01:03.197410   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:05.698110   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:03.757014   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.257259   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:04.490159   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.490493   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:08.991447   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.172345   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:06.199809   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:06.199897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:06.240690   84547 cri.go:89] found id: ""
	I1210 00:01:06.240749   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.240760   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:06.240769   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:06.240822   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:06.275743   84547 cri.go:89] found id: ""
	I1210 00:01:06.275770   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.275779   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:06.275786   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:06.275848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:06.310399   84547 cri.go:89] found id: ""
	I1210 00:01:06.310427   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.310438   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:06.310445   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:06.310507   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:06.345278   84547 cri.go:89] found id: ""
	I1210 00:01:06.345308   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.345320   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:06.345328   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:06.345394   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:06.381926   84547 cri.go:89] found id: ""
	I1210 00:01:06.381961   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.381988   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:06.381994   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:06.382064   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:06.415341   84547 cri.go:89] found id: ""
	I1210 00:01:06.415367   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.415377   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:06.415385   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:06.415438   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:06.449701   84547 cri.go:89] found id: ""
	I1210 00:01:06.449733   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.449743   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:06.449750   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:06.449814   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:06.484271   84547 cri.go:89] found id: ""
	I1210 00:01:06.484298   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.484307   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:06.484317   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:06.484333   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:06.570279   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:06.570313   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:06.607812   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:06.607845   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:06.658206   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:06.658243   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:06.670949   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:06.670978   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:06.745785   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:09.246367   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:09.259390   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:09.259462   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:09.291811   84547 cri.go:89] found id: ""
	I1210 00:01:09.291841   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.291853   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:09.291865   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:09.291922   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:09.326996   84547 cri.go:89] found id: ""
	I1210 00:01:09.327027   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.327038   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:09.327045   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:09.327109   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:09.361026   84547 cri.go:89] found id: ""
	I1210 00:01:09.361062   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.361073   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:09.361081   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:09.361152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:09.397116   84547 cri.go:89] found id: ""
	I1210 00:01:09.397148   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.397159   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:09.397166   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:09.397225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:09.428989   84547 cri.go:89] found id: ""
	I1210 00:01:09.429025   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.429039   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:09.429046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:09.429111   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:09.462491   84547 cri.go:89] found id: ""
	I1210 00:01:09.462535   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.462547   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:09.462572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:09.462652   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:09.502705   84547 cri.go:89] found id: ""
	I1210 00:01:09.502728   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.502735   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:09.502740   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:09.502798   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:09.538721   84547 cri.go:89] found id: ""
	I1210 00:01:09.538744   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.538754   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:09.538766   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:09.538779   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:09.590732   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:09.590768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:09.604928   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:09.604960   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:09.681457   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:09.681484   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:09.681498   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:09.758116   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:09.758149   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
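	(Editor's note: each failed cycle ends with the same collectors: kubelet and CRI-O journals via `journalctl -u <unit> -n 400`, warning-level dmesg, and the "container status" one-liner. That last command, `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`, first resolves crictl's path (falling back to the bare name if `which` finds nothing) and only runs `docker ps -a` if the crictl invocation fails outright. The sketch below is a hypothetical local equivalent of that collector, not the ssh_runner-based version minikube actually uses.)

```go
// containerstatus.go - illustrative sketch only; runs the "container status"
// collector command from the log locally instead of over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same shell one-liner as in the log: try crictl first, fall back to docker.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("container status collection failed: %v\n", err)
	}
	fmt.Print(string(out))
}
```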
	I1210 00:01:07.698904   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.198215   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:08.755738   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.756172   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.992252   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:13.491283   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:12.307016   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:12.320284   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:12.320371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:12.351984   84547 cri.go:89] found id: ""
	I1210 00:01:12.352007   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.352016   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:12.352024   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:12.352086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:12.387797   84547 cri.go:89] found id: ""
	I1210 00:01:12.387829   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.387840   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:12.387848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:12.387924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:12.417934   84547 cri.go:89] found id: ""
	I1210 00:01:12.417968   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.417979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:12.417987   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:12.418048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:12.451771   84547 cri.go:89] found id: ""
	I1210 00:01:12.451802   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.451815   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:12.451822   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:12.451890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:12.485007   84547 cri.go:89] found id: ""
	I1210 00:01:12.485037   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.485048   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:12.485055   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:12.485117   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:12.527851   84547 cri.go:89] found id: ""
	I1210 00:01:12.527879   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.527895   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:12.527905   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:12.527967   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:12.560341   84547 cri.go:89] found id: ""
	I1210 00:01:12.560364   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.560372   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:12.560377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:12.560428   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:12.593118   84547 cri.go:89] found id: ""
	I1210 00:01:12.593143   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.593150   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:12.593158   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:12.593169   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:12.658694   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:12.658717   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:12.658749   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:12.739742   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:12.739776   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:12.780949   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:12.780977   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:12.829570   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:12.829607   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:15.344239   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:15.358424   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:15.358527   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:15.390712   84547 cri.go:89] found id: ""
	I1210 00:01:15.390744   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.390757   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:15.390764   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:15.390847   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:15.424198   84547 cri.go:89] found id: ""
	I1210 00:01:15.424226   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.424234   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:15.424239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:15.424306   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:15.458340   84547 cri.go:89] found id: ""
	I1210 00:01:15.458363   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.458370   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:15.458377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:15.458422   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:15.491468   84547 cri.go:89] found id: ""
	I1210 00:01:15.491497   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.491507   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:15.491520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:15.491597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:12.698224   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.197841   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:12.756618   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:14.759492   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:17.256075   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.491494   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:17.989410   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.528405   84547 cri.go:89] found id: ""
	I1210 00:01:15.528437   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.528448   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:15.528455   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:15.528517   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:15.562966   84547 cri.go:89] found id: ""
	I1210 00:01:15.562995   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.563005   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:15.563012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:15.563063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:15.595815   84547 cri.go:89] found id: ""
	I1210 00:01:15.595838   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.595845   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:15.595850   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:15.595907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:15.631296   84547 cri.go:89] found id: ""
	I1210 00:01:15.631322   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.631333   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:15.631347   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:15.631362   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:15.680177   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:15.680213   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:15.693685   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:15.693724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:15.760285   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:15.760312   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:15.760326   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:15.837814   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:15.837855   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:18.377112   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:18.390167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:18.390230   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:18.424095   84547 cri.go:89] found id: ""
	I1210 00:01:18.424127   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.424140   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:18.424150   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:18.424216   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:18.456840   84547 cri.go:89] found id: ""
	I1210 00:01:18.456868   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.456876   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:18.456882   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:18.456938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:18.492109   84547 cri.go:89] found id: ""
	I1210 00:01:18.492134   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.492145   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:18.492153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:18.492212   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:18.530445   84547 cri.go:89] found id: ""
	I1210 00:01:18.530472   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.530480   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:18.530486   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:18.530549   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:18.567036   84547 cri.go:89] found id: ""
	I1210 00:01:18.567060   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.567070   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:18.567077   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:18.567136   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:18.606829   84547 cri.go:89] found id: ""
	I1210 00:01:18.606853   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.606863   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:18.606870   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:18.606927   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:18.646032   84547 cri.go:89] found id: ""
	I1210 00:01:18.646061   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.646070   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:18.646075   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:18.646127   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:18.683969   84547 cri.go:89] found id: ""
	I1210 00:01:18.683997   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.684008   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:18.684019   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:18.684036   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:18.735760   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:18.735807   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:18.751689   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:18.751724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:18.823252   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:18.823272   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:18.823287   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:18.908071   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:18.908110   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:17.698577   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:20.197901   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:19.757398   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:22.255920   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:20.490512   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:22.990168   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:21.445415   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:21.458091   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:21.458152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:21.492615   84547 cri.go:89] found id: ""
	I1210 00:01:21.492646   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.492657   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:21.492664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:21.492718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:21.529564   84547 cri.go:89] found id: ""
	I1210 00:01:21.529586   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.529594   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:21.529599   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:21.529669   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:21.566409   84547 cri.go:89] found id: ""
	I1210 00:01:21.566441   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.566450   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:21.566456   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:21.566528   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:21.601783   84547 cri.go:89] found id: ""
	I1210 00:01:21.601815   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.601827   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:21.601835   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:21.601895   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:21.635740   84547 cri.go:89] found id: ""
	I1210 00:01:21.635762   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.635769   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:21.635775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:21.635831   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:21.667777   84547 cri.go:89] found id: ""
	I1210 00:01:21.667806   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.667826   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:21.667834   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:21.667894   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:21.704355   84547 cri.go:89] found id: ""
	I1210 00:01:21.704380   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.704388   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:21.704398   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:21.704457   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:21.737365   84547 cri.go:89] found id: ""
	I1210 00:01:21.737398   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.737410   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:21.737422   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:21.737438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:21.751394   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:21.751434   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:21.823217   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:21.823240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:21.823255   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:21.910097   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:21.910131   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:21.950087   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:21.950123   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.504626   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:24.517920   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:24.517997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:24.551766   84547 cri.go:89] found id: ""
	I1210 00:01:24.551806   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.551814   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:24.551821   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:24.551880   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:24.588210   84547 cri.go:89] found id: ""
	I1210 00:01:24.588246   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.588256   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:24.588263   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:24.588341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:24.620569   84547 cri.go:89] found id: ""
	I1210 00:01:24.620598   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.620607   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:24.620613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:24.620673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:24.657527   84547 cri.go:89] found id: ""
	I1210 00:01:24.657550   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.657558   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:24.657564   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:24.657636   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:24.689382   84547 cri.go:89] found id: ""
	I1210 00:01:24.689410   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.689418   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:24.689423   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:24.689475   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:24.727170   84547 cri.go:89] found id: ""
	I1210 00:01:24.727207   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.727224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:24.727230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:24.727280   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:24.760733   84547 cri.go:89] found id: ""
	I1210 00:01:24.760759   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.760769   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:24.760775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:24.760842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:24.794750   84547 cri.go:89] found id: ""
	I1210 00:01:24.794782   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.794791   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:24.794799   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:24.794810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.847403   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:24.847441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:24.862240   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:24.862273   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:24.936373   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:24.936396   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:24.936409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:25.011126   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:25.011160   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:22.198527   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.697981   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.257466   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:26.258086   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.990792   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:26.991843   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
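	(Editor's note: interleaved with the 84547 cycles, three other test processes — 83859, 83900, and 84259 — are polling `metrics-server-6867b74b74-*` pods in `kube-system` and repeatedly logging `"Ready":"False"`; these are the waits behind the UserAppExistsAfterStop/AddonExistsAfterStop failures listed in the summary. The sketch below is a hypothetical client-go version of such a Ready-condition poll, not minikube's pod_ready helper; the kubeconfig path and the 2-second interval (suggested by the spacing of the log lines) are assumptions.)

```go
// podready.go - illustrative sketch only; a minimal client-go poll of a pod's
// Ready condition, approximating what the pod_ready.go lines above report.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; minikube uses its per-profile kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		// Pod name taken from the log; it never reaches Ready during this run.
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "metrics-server-6867b74b74-sd58c", metav1.GetOptions{})
		if err != nil {
			fmt.Println("get pod:", err)
		} else {
			fmt.Printf("pod %q Ready=%v\n", pod.Name, isPodReady(pod))
		}
		time.Sleep(2 * time.Second)
	}
}
```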
	I1210 00:01:27.551134   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:27.564526   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:27.564665   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:27.600652   84547 cri.go:89] found id: ""
	I1210 00:01:27.600677   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.600685   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:27.600690   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:27.600753   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:27.643593   84547 cri.go:89] found id: ""
	I1210 00:01:27.643625   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.643636   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:27.643649   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:27.643705   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:27.680633   84547 cri.go:89] found id: ""
	I1210 00:01:27.680664   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.680676   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:27.680684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:27.680748   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:27.721790   84547 cri.go:89] found id: ""
	I1210 00:01:27.721824   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.721832   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:27.721837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:27.721901   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:27.756462   84547 cri.go:89] found id: ""
	I1210 00:01:27.756492   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.756504   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:27.756512   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:27.756574   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:27.792692   84547 cri.go:89] found id: ""
	I1210 00:01:27.792728   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.792838   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:27.792851   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:27.792912   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:27.830065   84547 cri.go:89] found id: ""
	I1210 00:01:27.830098   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.830107   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:27.830116   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:27.830182   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:27.867516   84547 cri.go:89] found id: ""
	I1210 00:01:27.867557   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.867580   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:27.867595   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:27.867611   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:27.917622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:27.917660   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:27.931687   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:27.931717   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:28.003827   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:28.003860   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:28.003876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:28.081375   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:28.081410   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:27.197999   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:29.198364   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:31.698061   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:28.755855   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:30.756687   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:29.489992   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:31.490850   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:33.990464   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:30.622885   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:30.635990   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:30.636065   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:30.671418   84547 cri.go:89] found id: ""
	I1210 00:01:30.671453   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.671465   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:30.671475   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:30.671548   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:30.707168   84547 cri.go:89] found id: ""
	I1210 00:01:30.707203   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.707214   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:30.707222   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:30.707275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:30.741349   84547 cri.go:89] found id: ""
	I1210 00:01:30.741377   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.741386   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:30.741395   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:30.741449   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:30.774952   84547 cri.go:89] found id: ""
	I1210 00:01:30.774985   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.774997   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:30.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:30.775062   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:30.809596   84547 cri.go:89] found id: ""
	I1210 00:01:30.809623   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.809631   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:30.809636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:30.809699   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:30.844256   84547 cri.go:89] found id: ""
	I1210 00:01:30.844288   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.844300   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:30.844308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:30.844371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:30.876086   84547 cri.go:89] found id: ""
	I1210 00:01:30.876113   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.876124   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:30.876131   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:30.876195   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:30.910858   84547 cri.go:89] found id: ""
	I1210 00:01:30.910884   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.910895   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:30.910905   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:30.910920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:30.990855   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:30.990876   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:30.990888   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:31.069186   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:31.069221   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:31.109653   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:31.109689   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:31.164962   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:31.165001   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:33.679784   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:33.692222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:33.692310   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:33.727937   84547 cri.go:89] found id: ""
	I1210 00:01:33.727960   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.727967   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:33.727973   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:33.728022   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:33.762122   84547 cri.go:89] found id: ""
	I1210 00:01:33.762151   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.762162   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:33.762171   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:33.762237   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:33.793855   84547 cri.go:89] found id: ""
	I1210 00:01:33.793883   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.793894   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:33.793903   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:33.793961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:33.827240   84547 cri.go:89] found id: ""
	I1210 00:01:33.827280   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.827292   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:33.827300   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:33.827366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:33.863705   84547 cri.go:89] found id: ""
	I1210 00:01:33.863728   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.863738   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:33.863743   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:33.863792   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:33.897189   84547 cri.go:89] found id: ""
	I1210 00:01:33.897213   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.897224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:33.897229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:33.897282   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:33.930001   84547 cri.go:89] found id: ""
	I1210 00:01:33.930034   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.930044   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:33.930052   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:33.930113   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:33.967330   84547 cri.go:89] found id: ""
	I1210 00:01:33.967359   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.967367   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:33.967378   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:33.967390   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:34.051952   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:34.051996   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:34.087729   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:34.087762   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:34.137879   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:34.137915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:34.151989   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:34.152025   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:34.225587   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:33.698633   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.197023   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:33.257021   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:35.258050   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.489900   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:38.490643   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.726065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:36.740513   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:36.740594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:36.774959   84547 cri.go:89] found id: ""
	I1210 00:01:36.774991   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.775000   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:36.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:36.775070   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:36.808888   84547 cri.go:89] found id: ""
	I1210 00:01:36.808921   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.808934   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:36.808941   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:36.809001   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:36.841695   84547 cri.go:89] found id: ""
	I1210 00:01:36.841727   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.841738   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:36.841748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:36.841840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:36.877479   84547 cri.go:89] found id: ""
	I1210 00:01:36.877509   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.877522   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:36.877531   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:36.877600   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:36.909232   84547 cri.go:89] found id: ""
	I1210 00:01:36.909257   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.909265   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:36.909271   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:36.909328   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:36.945858   84547 cri.go:89] found id: ""
	I1210 00:01:36.945892   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.945904   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:36.945912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:36.945981   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:36.978559   84547 cri.go:89] found id: ""
	I1210 00:01:36.978592   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.978604   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:36.978611   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:36.978674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:37.015523   84547 cri.go:89] found id: ""
	I1210 00:01:37.015555   84547 logs.go:282] 0 containers: []
	W1210 00:01:37.015587   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:37.015598   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:37.015613   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:37.094825   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:37.094876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:37.134442   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:37.134470   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:37.184691   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:37.184728   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:37.198801   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:37.198828   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:37.268415   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:39.768790   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:39.783106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:39.783184   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:39.816629   84547 cri.go:89] found id: ""
	I1210 00:01:39.816655   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.816665   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:39.816672   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:39.816749   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:39.851518   84547 cri.go:89] found id: ""
	I1210 00:01:39.851550   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.851581   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:39.851590   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:39.851648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:39.887790   84547 cri.go:89] found id: ""
	I1210 00:01:39.887827   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.887842   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:39.887852   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:39.887924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:39.922230   84547 cri.go:89] found id: ""
	I1210 00:01:39.922255   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.922262   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:39.922268   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:39.922332   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:39.957142   84547 cri.go:89] found id: ""
	I1210 00:01:39.957174   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.957184   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:39.957192   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:39.957254   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:40.004583   84547 cri.go:89] found id: ""
	I1210 00:01:40.004609   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.004618   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:40.004624   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:40.004675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:40.038278   84547 cri.go:89] found id: ""
	I1210 00:01:40.038300   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.038308   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:40.038313   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:40.038366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:40.071920   84547 cri.go:89] found id: ""
	I1210 00:01:40.071947   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.071954   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:40.071963   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:40.071973   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:40.142005   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:40.142033   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:40.142049   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:40.219413   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:40.219452   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:40.260786   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:40.260822   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:40.315943   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:40.315988   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:38.198247   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:40.198455   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:37.756243   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:39.756567   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:41.757107   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:40.990316   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:42.990948   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:42.832592   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:42.847532   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:42.847630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:42.879313   84547 cri.go:89] found id: ""
	I1210 00:01:42.879341   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.879348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:42.879354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:42.879403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:42.914839   84547 cri.go:89] found id: ""
	I1210 00:01:42.914866   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.914877   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:42.914884   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:42.914947   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:42.949514   84547 cri.go:89] found id: ""
	I1210 00:01:42.949553   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.949564   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:42.949572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:42.949632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:42.987974   84547 cri.go:89] found id: ""
	I1210 00:01:42.988004   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.988021   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:42.988029   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:42.988087   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:43.022835   84547 cri.go:89] found id: ""
	I1210 00:01:43.022860   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.022867   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:43.022873   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:43.022921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:43.055940   84547 cri.go:89] found id: ""
	I1210 00:01:43.055967   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.055975   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:43.055981   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:43.056030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:43.088728   84547 cri.go:89] found id: ""
	I1210 00:01:43.088754   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.088762   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:43.088769   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:43.088827   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:43.122809   84547 cri.go:89] found id: ""
	I1210 00:01:43.122842   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.122853   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:43.122865   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:43.122881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:43.172243   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:43.172277   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:43.186566   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:43.186596   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:43.264301   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:43.264327   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:43.264341   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:43.339804   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:43.339848   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:42.698066   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.197813   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:44.256069   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:46.256746   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.490253   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:47.989687   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.881356   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:45.894702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:45.894779   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:45.928927   84547 cri.go:89] found id: ""
	I1210 00:01:45.928951   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.928958   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:45.928964   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:45.929009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:45.965271   84547 cri.go:89] found id: ""
	I1210 00:01:45.965303   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.965315   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:45.965323   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:45.965392   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:45.999059   84547 cri.go:89] found id: ""
	I1210 00:01:45.999082   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.999090   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:45.999095   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:45.999140   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:46.034425   84547 cri.go:89] found id: ""
	I1210 00:01:46.034456   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.034468   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:46.034476   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:46.034529   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:46.067946   84547 cri.go:89] found id: ""
	I1210 00:01:46.067970   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.067986   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:46.067993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:46.068056   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:46.101671   84547 cri.go:89] found id: ""
	I1210 00:01:46.101699   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.101710   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:46.101718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:46.101783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:46.135860   84547 cri.go:89] found id: ""
	I1210 00:01:46.135885   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.135893   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:46.135898   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:46.135948   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:46.177398   84547 cri.go:89] found id: ""
	I1210 00:01:46.177439   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.177450   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:46.177461   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:46.177476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:46.248134   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:46.248160   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:46.248175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:46.323652   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:46.323688   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:46.364017   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:46.364044   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:46.429480   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:46.429524   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:48.956518   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:48.970209   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:48.970294   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:49.008933   84547 cri.go:89] found id: ""
	I1210 00:01:49.008966   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.008977   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:49.008986   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:49.009050   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:49.044802   84547 cri.go:89] found id: ""
	I1210 00:01:49.044833   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.044843   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:49.044850   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:49.044921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:49.077484   84547 cri.go:89] found id: ""
	I1210 00:01:49.077517   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.077525   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:49.077530   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:49.077576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:49.110144   84547 cri.go:89] found id: ""
	I1210 00:01:49.110174   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.110186   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:49.110193   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:49.110255   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:49.142591   84547 cri.go:89] found id: ""
	I1210 00:01:49.142622   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.142633   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:49.142646   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:49.142709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:49.175456   84547 cri.go:89] found id: ""
	I1210 00:01:49.175486   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.175497   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:49.175505   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:49.175603   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:49.208341   84547 cri.go:89] found id: ""
	I1210 00:01:49.208368   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.208379   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:49.208387   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:49.208445   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:49.241486   84547 cri.go:89] found id: ""
	I1210 00:01:49.241509   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.241518   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:49.241615   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:49.241647   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:49.280023   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:49.280051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:49.328822   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:49.328858   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:49.343076   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:49.343104   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:49.418010   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:49.418037   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:49.418051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:47.198917   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:49.697840   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:48.757230   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:51.256927   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:49.990701   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:52.490649   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:52.004053   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:52.017350   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:52.017418   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:52.055617   84547 cri.go:89] found id: ""
	I1210 00:01:52.055641   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.055648   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:52.055654   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:52.055712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:52.088601   84547 cri.go:89] found id: ""
	I1210 00:01:52.088629   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.088637   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:52.088642   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:52.088694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:52.121880   84547 cri.go:89] found id: ""
	I1210 00:01:52.121912   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.121922   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:52.121928   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:52.121986   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:52.161281   84547 cri.go:89] found id: ""
	I1210 00:01:52.161321   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.161334   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:52.161341   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:52.161406   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:52.222768   84547 cri.go:89] found id: ""
	I1210 00:01:52.222793   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.222800   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:52.222806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:52.222862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:52.271441   84547 cri.go:89] found id: ""
	I1210 00:01:52.271465   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.271473   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:52.271479   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:52.271526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:52.311128   84547 cri.go:89] found id: ""
	I1210 00:01:52.311152   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.311160   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:52.311165   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:52.311211   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:52.348862   84547 cri.go:89] found id: ""
	I1210 00:01:52.348885   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.348892   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:52.348900   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:52.348913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:52.401280   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:52.401324   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:52.415532   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:52.415580   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:52.484956   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:52.484979   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:52.484994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:52.565102   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:52.565137   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:55.106446   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:55.121756   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:55.121846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:55.158601   84547 cri.go:89] found id: ""
	I1210 00:01:55.158632   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.158643   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:55.158650   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:55.158712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:55.192424   84547 cri.go:89] found id: ""
	I1210 00:01:55.192454   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.192464   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:55.192471   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:55.192530   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:55.226178   84547 cri.go:89] found id: ""
	I1210 00:01:55.226204   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.226213   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:55.226222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:55.226285   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:55.264123   84547 cri.go:89] found id: ""
	I1210 00:01:55.264148   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.264161   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:55.264169   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:55.264226   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:55.302476   84547 cri.go:89] found id: ""
	I1210 00:01:55.302503   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.302512   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:55.302520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:55.302597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:55.342308   84547 cri.go:89] found id: ""
	I1210 00:01:55.342341   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.342352   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:55.342360   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:55.342419   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:55.382203   84547 cri.go:89] found id: ""
	I1210 00:01:55.382226   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.382232   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:55.382238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:55.382286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:55.421381   84547 cri.go:89] found id: ""
	I1210 00:01:55.421409   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.421421   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:55.421432   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:55.421449   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:55.473758   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:55.473793   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:55.488138   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:55.488166   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:01:52.198005   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:54.697589   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:56.697748   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:53.756310   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:55.756964   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:54.990271   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:57.490303   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	W1210 00:01:55.567216   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:55.567240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:55.567251   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:55.648276   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:55.648319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:58.185245   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:58.199015   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:58.199091   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:58.232327   84547 cri.go:89] found id: ""
	I1210 00:01:58.232352   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.232360   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:58.232368   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:58.232436   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:58.270312   84547 cri.go:89] found id: ""
	I1210 00:01:58.270342   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.270353   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:58.270360   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:58.270420   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:58.303389   84547 cri.go:89] found id: ""
	I1210 00:01:58.303415   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.303422   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:58.303427   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:58.303486   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:58.338703   84547 cri.go:89] found id: ""
	I1210 00:01:58.338735   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.338747   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:58.338755   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:58.338817   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:58.376727   84547 cri.go:89] found id: ""
	I1210 00:01:58.376759   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.376770   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:58.376779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:58.376841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:58.410192   84547 cri.go:89] found id: ""
	I1210 00:01:58.410217   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.410224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:58.410230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:58.410286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:58.449772   84547 cri.go:89] found id: ""
	I1210 00:01:58.449794   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.449802   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:58.449807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:58.449859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:58.484285   84547 cri.go:89] found id: ""
	I1210 00:01:58.484316   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.484328   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:58.484339   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:58.484356   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:58.538402   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:58.538438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:58.551361   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:58.551391   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:58.613809   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:58.613836   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:58.613850   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:58.689606   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:58.689640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:58.698283   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.197599   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:57.758229   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:00.256339   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:59.490413   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.491035   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:03.990725   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.230924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:01.244878   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:01.244990   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:01.280475   84547 cri.go:89] found id: ""
	I1210 00:02:01.280501   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.280509   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:01.280514   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:01.280561   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:01.323042   84547 cri.go:89] found id: ""
	I1210 00:02:01.323065   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.323071   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:01.323077   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:01.323124   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:01.356146   84547 cri.go:89] found id: ""
	I1210 00:02:01.356171   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.356181   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:01.356190   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:01.356247   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:01.392542   84547 cri.go:89] found id: ""
	I1210 00:02:01.392567   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.392577   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:01.392584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:01.392721   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:01.426232   84547 cri.go:89] found id: ""
	I1210 00:02:01.426257   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.426268   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:01.426275   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:01.426341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:01.459754   84547 cri.go:89] found id: ""
	I1210 00:02:01.459786   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.459798   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:01.459806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:01.459865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:01.492406   84547 cri.go:89] found id: ""
	I1210 00:02:01.492435   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.492445   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:01.492450   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:01.492499   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:01.532012   84547 cri.go:89] found id: ""
	I1210 00:02:01.532034   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.532042   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:01.532049   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:01.532060   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:01.583145   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:01.583181   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:01.596910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:01.596939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:01.670480   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:01.670506   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:01.670534   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:01.748001   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:01.748041   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:04.291065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:04.304507   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:04.304587   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:04.341673   84547 cri.go:89] found id: ""
	I1210 00:02:04.341700   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.341713   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:04.341720   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:04.341772   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:04.379743   84547 cri.go:89] found id: ""
	I1210 00:02:04.379776   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.379787   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:04.379795   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:04.379856   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:04.413532   84547 cri.go:89] found id: ""
	I1210 00:02:04.413562   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.413573   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:04.413588   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:04.413648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:04.449192   84547 cri.go:89] found id: ""
	I1210 00:02:04.449221   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.449231   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:04.449238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:04.449324   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:04.484638   84547 cri.go:89] found id: ""
	I1210 00:02:04.484666   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.484677   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:04.484686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:04.484745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:04.524854   84547 cri.go:89] found id: ""
	I1210 00:02:04.524889   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.524903   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:04.524912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:04.524976   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:04.564697   84547 cri.go:89] found id: ""
	I1210 00:02:04.564726   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.564737   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:04.564748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:04.564797   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:04.603506   84547 cri.go:89] found id: ""
	I1210 00:02:04.603534   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.603544   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:04.603554   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:04.603583   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:04.653025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:04.653062   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:04.666833   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:04.666878   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:04.745491   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:04.745513   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:04.745526   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:04.825267   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:04.825304   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:03.702878   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:06.197834   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:02.755541   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:04.757334   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:07.256160   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:06.490491   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:08.491144   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:07.365419   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:07.378968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:07.379030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:07.412830   84547 cri.go:89] found id: ""
	I1210 00:02:07.412859   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.412868   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:07.412873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:07.412938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:07.445622   84547 cri.go:89] found id: ""
	I1210 00:02:07.445661   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.445674   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:07.445682   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:07.445742   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:07.480431   84547 cri.go:89] found id: ""
	I1210 00:02:07.480466   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.480474   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:07.480480   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:07.480533   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:07.518741   84547 cri.go:89] found id: ""
	I1210 00:02:07.518776   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.518790   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:07.518797   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:07.518860   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:07.552193   84547 cri.go:89] found id: ""
	I1210 00:02:07.552216   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.552223   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:07.552229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:07.552275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:07.585762   84547 cri.go:89] found id: ""
	I1210 00:02:07.585784   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.585792   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:07.585798   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:07.585843   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:07.618619   84547 cri.go:89] found id: ""
	I1210 00:02:07.618645   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.618653   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:07.618659   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:07.618709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:07.653377   84547 cri.go:89] found id: ""
	I1210 00:02:07.653418   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.653428   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:07.653440   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:07.653456   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:07.709366   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:07.709401   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:07.723762   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:07.723792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:07.804849   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:07.804869   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:07.804886   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:07.887117   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:07.887154   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:10.423120   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:10.436563   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:10.436628   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:10.470622   84547 cri.go:89] found id: ""
	I1210 00:02:10.470650   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.470658   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:10.470664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:10.470735   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:10.506211   84547 cri.go:89] found id: ""
	I1210 00:02:10.506238   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.506250   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:10.506257   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:10.506368   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:08.198217   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.699364   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:09.256492   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:11.256897   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.990662   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:13.491635   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.541846   84547 cri.go:89] found id: ""
	I1210 00:02:10.541871   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.541879   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:10.541885   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:10.541952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:10.581391   84547 cri.go:89] found id: ""
	I1210 00:02:10.581416   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.581427   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:10.581435   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:10.581503   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:10.615172   84547 cri.go:89] found id: ""
	I1210 00:02:10.615206   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.615216   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:10.615223   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:10.615289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:10.650791   84547 cri.go:89] found id: ""
	I1210 00:02:10.650813   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.650821   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:10.650826   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:10.650876   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:10.685428   84547 cri.go:89] found id: ""
	I1210 00:02:10.685452   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.685460   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:10.685466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:10.685524   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:10.719139   84547 cri.go:89] found id: ""
	I1210 00:02:10.719174   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.719186   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:10.719196   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:10.719211   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:10.732045   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:10.732073   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:10.805084   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:10.805111   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:10.805127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:10.888301   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:10.888337   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:10.926005   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:10.926033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:13.479317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:13.494021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:13.494089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:13.527753   84547 cri.go:89] found id: ""
	I1210 00:02:13.527787   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.527799   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:13.527806   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:13.527862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:13.560574   84547 cri.go:89] found id: ""
	I1210 00:02:13.560607   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.560618   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:13.560625   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:13.560688   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:13.595507   84547 cri.go:89] found id: ""
	I1210 00:02:13.595551   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.595584   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:13.595592   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:13.595657   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:13.629848   84547 cri.go:89] found id: ""
	I1210 00:02:13.629873   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.629884   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:13.629892   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:13.629952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:13.662407   84547 cri.go:89] found id: ""
	I1210 00:02:13.662436   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.662447   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:13.662454   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:13.662509   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:13.694892   84547 cri.go:89] found id: ""
	I1210 00:02:13.694921   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.694940   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:13.694949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:13.695013   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:13.733248   84547 cri.go:89] found id: ""
	I1210 00:02:13.733327   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.733349   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:13.733358   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:13.733426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:13.771853   84547 cri.go:89] found id: ""
	I1210 00:02:13.771884   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.771894   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:13.771906   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:13.771920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:13.846886   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:13.846913   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:13.846928   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:13.929722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:13.929758   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:13.968401   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:13.968427   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:14.019770   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:14.019811   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:13.197726   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.198437   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:13.257271   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.755750   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.990729   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:18.490851   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:16.532794   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:16.547084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:16.547172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:16.584046   84547 cri.go:89] found id: ""
	I1210 00:02:16.584073   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.584084   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:16.584091   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:16.584150   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:16.615981   84547 cri.go:89] found id: ""
	I1210 00:02:16.616012   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.616023   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:16.616030   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:16.616094   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:16.647939   84547 cri.go:89] found id: ""
	I1210 00:02:16.647967   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.647979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:16.647986   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:16.648048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:16.682585   84547 cri.go:89] found id: ""
	I1210 00:02:16.682620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.682632   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:16.682640   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:16.682695   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:16.718593   84547 cri.go:89] found id: ""
	I1210 00:02:16.718620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.718628   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:16.718634   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:16.718687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:16.752513   84547 cri.go:89] found id: ""
	I1210 00:02:16.752536   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.752543   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:16.752549   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:16.752598   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:16.787679   84547 cri.go:89] found id: ""
	I1210 00:02:16.787702   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.787710   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:16.787715   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:16.787777   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:16.821275   84547 cri.go:89] found id: ""
	I1210 00:02:16.821297   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.821305   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:16.821312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:16.821322   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:16.872500   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:16.872533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:16.885185   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:16.885212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:16.962658   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:16.962679   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:16.962694   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:17.039689   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:17.039726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:19.578060   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:19.590601   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:19.590675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:19.622145   84547 cri.go:89] found id: ""
	I1210 00:02:19.622170   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.622179   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:19.622184   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:19.622231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:19.653204   84547 cri.go:89] found id: ""
	I1210 00:02:19.653232   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.653243   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:19.653250   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:19.653317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:19.685104   84547 cri.go:89] found id: ""
	I1210 00:02:19.685137   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.685148   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:19.685156   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:19.685213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:19.719087   84547 cri.go:89] found id: ""
	I1210 00:02:19.719113   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.719121   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:19.719126   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:19.719176   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:19.753210   84547 cri.go:89] found id: ""
	I1210 00:02:19.753239   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.753250   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:19.753258   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:19.753317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:19.787608   84547 cri.go:89] found id: ""
	I1210 00:02:19.787635   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.787645   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:19.787653   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:19.787718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:19.823111   84547 cri.go:89] found id: ""
	I1210 00:02:19.823142   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.823154   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:19.823161   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:19.823221   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:19.858274   84547 cri.go:89] found id: ""
	I1210 00:02:19.858301   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.858312   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:19.858323   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:19.858344   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:19.905386   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:19.905420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:19.918995   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:19.919026   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:19.990676   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:19.990700   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:19.990716   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:20.064396   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:20.064435   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:17.698109   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:20.197963   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:17.756633   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:19.757487   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:22.257036   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:20.990682   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:23.490383   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:22.604477   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:22.617408   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:22.617487   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:22.650144   84547 cri.go:89] found id: ""
	I1210 00:02:22.650178   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.650189   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:22.650197   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:22.650293   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:22.684342   84547 cri.go:89] found id: ""
	I1210 00:02:22.684367   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.684375   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:22.684380   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:22.684429   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:22.718168   84547 cri.go:89] found id: ""
	I1210 00:02:22.718194   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.718204   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:22.718211   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:22.718271   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:22.755192   84547 cri.go:89] found id: ""
	I1210 00:02:22.755222   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.755232   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:22.755240   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:22.755297   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:22.793095   84547 cri.go:89] found id: ""
	I1210 00:02:22.793129   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.793141   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:22.793149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:22.793209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:22.831116   84547 cri.go:89] found id: ""
	I1210 00:02:22.831146   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.831157   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:22.831164   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:22.831235   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:22.869652   84547 cri.go:89] found id: ""
	I1210 00:02:22.869686   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.869704   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:22.869709   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:22.869756   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:22.907480   84547 cri.go:89] found id: ""
	I1210 00:02:22.907504   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.907513   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:22.907520   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:22.907533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:22.983880   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:22.983902   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:22.983915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:23.062840   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:23.062880   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:23.101427   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:23.101476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:23.153861   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:23.153893   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:22.198638   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:24.698708   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:24.757526   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:27.256296   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:25.491835   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:27.990755   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:25.666908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:25.680047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:25.680125   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:25.715821   84547 cri.go:89] found id: ""
	I1210 00:02:25.715853   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.715865   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:25.715873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:25.715931   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:25.748191   84547 cri.go:89] found id: ""
	I1210 00:02:25.748222   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.748232   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:25.748239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:25.748295   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:25.786474   84547 cri.go:89] found id: ""
	I1210 00:02:25.786498   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.786505   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:25.786511   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:25.786569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:25.819282   84547 cri.go:89] found id: ""
	I1210 00:02:25.819319   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.819330   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:25.819337   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:25.819400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:25.858064   84547 cri.go:89] found id: ""
	I1210 00:02:25.858091   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.858100   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:25.858106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:25.858169   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:25.893335   84547 cri.go:89] found id: ""
	I1210 00:02:25.893362   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.893373   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:25.893380   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:25.893439   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:25.927225   84547 cri.go:89] found id: ""
	I1210 00:02:25.927254   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.927265   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:25.927272   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:25.927341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:25.963678   84547 cri.go:89] found id: ""
	I1210 00:02:25.963715   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.963725   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:25.963738   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:25.963756   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:25.994462   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:25.994488   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:26.061394   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:26.061442   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:26.061458   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:26.135152   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:26.135187   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:26.171961   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:26.171994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:28.723326   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:28.735721   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:28.735787   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:28.769493   84547 cri.go:89] found id: ""
	I1210 00:02:28.769516   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.769525   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:28.769530   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:28.769582   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:28.805594   84547 cri.go:89] found id: ""
	I1210 00:02:28.805641   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.805652   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:28.805658   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:28.805704   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:28.838982   84547 cri.go:89] found id: ""
	I1210 00:02:28.839007   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.839015   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:28.839020   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:28.839072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:28.876607   84547 cri.go:89] found id: ""
	I1210 00:02:28.876627   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.876635   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:28.876641   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:28.876700   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:28.910352   84547 cri.go:89] found id: ""
	I1210 00:02:28.910379   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.910386   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:28.910392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:28.910464   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:28.943896   84547 cri.go:89] found id: ""
	I1210 00:02:28.943917   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.943924   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:28.943930   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:28.943978   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:28.976554   84547 cri.go:89] found id: ""
	I1210 00:02:28.976583   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.976590   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:28.976596   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:28.976644   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:29.011334   84547 cri.go:89] found id: ""
	I1210 00:02:29.011357   84547 logs.go:282] 0 containers: []
	W1210 00:02:29.011364   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:29.011372   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:29.011384   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:29.061418   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:29.061464   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:29.074887   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:29.074913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:29.147240   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:29.147261   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:29.147272   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:29.225058   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:29.225094   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:27.197730   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:29.197818   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.198788   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:29.256802   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.258194   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:30.490822   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:32.990271   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.763432   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:31.777257   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:31.777359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:31.812558   84547 cri.go:89] found id: ""
	I1210 00:02:31.812588   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.812598   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:31.812606   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:31.812668   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:31.847042   84547 cri.go:89] found id: ""
	I1210 00:02:31.847065   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.847082   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:31.847088   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:31.847135   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:31.881182   84547 cri.go:89] found id: ""
	I1210 00:02:31.881208   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.881216   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:31.881221   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:31.881272   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:31.917367   84547 cri.go:89] found id: ""
	I1210 00:02:31.917393   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.917401   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:31.917407   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:31.917454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:31.956836   84547 cri.go:89] found id: ""
	I1210 00:02:31.956868   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.956883   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:31.956893   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:31.956952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:31.993125   84547 cri.go:89] found id: ""
	I1210 00:02:31.993151   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.993160   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:31.993168   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:31.993225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:32.031648   84547 cri.go:89] found id: ""
	I1210 00:02:32.031679   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.031687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:32.031692   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:32.031746   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:32.065894   84547 cri.go:89] found id: ""
	I1210 00:02:32.065923   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.065930   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:32.065941   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:32.065957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:32.133473   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:32.133496   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:32.133508   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:32.213129   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:32.213161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:32.251424   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:32.251453   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:32.302284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:32.302323   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:34.815963   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:34.829460   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:34.829543   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:34.865811   84547 cri.go:89] found id: ""
	I1210 00:02:34.865837   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.865847   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:34.865854   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:34.865916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:34.899185   84547 cri.go:89] found id: ""
	I1210 00:02:34.899211   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.899220   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:34.899227   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:34.899289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:34.932472   84547 cri.go:89] found id: ""
	I1210 00:02:34.932500   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.932509   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:34.932517   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:34.932581   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:34.965817   84547 cri.go:89] found id: ""
	I1210 00:02:34.965846   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.965857   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:34.965866   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:34.965930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:35.000036   84547 cri.go:89] found id: ""
	I1210 00:02:35.000066   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.000077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:35.000084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:35.000139   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:35.033808   84547 cri.go:89] found id: ""
	I1210 00:02:35.033839   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.033850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:35.033857   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:35.033916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:35.066242   84547 cri.go:89] found id: ""
	I1210 00:02:35.066269   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.066278   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:35.066285   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:35.066349   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:35.100740   84547 cri.go:89] found id: ""
	I1210 00:02:35.100763   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.100771   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:35.100779   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:35.100792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:35.155483   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:35.155520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:35.168910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:35.168939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:35.243234   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:35.243252   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:35.243263   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:35.320622   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:35.320657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:33.698543   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:36.198250   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:33.756108   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:35.757682   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:34.990969   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:37.495166   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:37.855684   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:37.869056   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:37.869156   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:37.903720   84547 cri.go:89] found id: ""
	I1210 00:02:37.903748   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.903759   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:37.903766   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:37.903859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:37.937741   84547 cri.go:89] found id: ""
	I1210 00:02:37.937780   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.937791   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:37.937808   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:37.937869   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:37.975255   84547 cri.go:89] found id: ""
	I1210 00:02:37.975281   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.975292   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:37.975299   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:37.975359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:38.017348   84547 cri.go:89] found id: ""
	I1210 00:02:38.017381   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.017393   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:38.017400   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:38.017460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:38.051039   84547 cri.go:89] found id: ""
	I1210 00:02:38.051065   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.051073   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:38.051079   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:38.051129   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:38.085752   84547 cri.go:89] found id: ""
	I1210 00:02:38.085782   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.085791   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:38.085799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:38.085858   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:38.119975   84547 cri.go:89] found id: ""
	I1210 00:02:38.120004   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.120014   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:38.120021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:38.120086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:38.161467   84547 cri.go:89] found id: ""
	I1210 00:02:38.161499   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.161526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:38.161537   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:38.161551   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:38.222277   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:38.222314   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:38.239300   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:38.239332   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:38.308997   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:38.309016   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:38.309032   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:38.394064   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:38.394108   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:38.198484   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.697916   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:38.257723   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.756530   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:39.990445   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:41.993952   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.933406   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:40.945862   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:40.945937   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:40.979503   84547 cri.go:89] found id: ""
	I1210 00:02:40.979532   84547 logs.go:282] 0 containers: []
	W1210 00:02:40.979540   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:40.979545   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:40.979619   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:41.016758   84547 cri.go:89] found id: ""
	I1210 00:02:41.016792   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.016803   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:41.016811   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:41.016873   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:41.053562   84547 cri.go:89] found id: ""
	I1210 00:02:41.053593   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.053601   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:41.053607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:41.053667   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:41.086713   84547 cri.go:89] found id: ""
	I1210 00:02:41.086745   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.086757   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:41.086767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:41.086830   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:41.122901   84547 cri.go:89] found id: ""
	I1210 00:02:41.122935   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.122945   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:41.122952   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:41.123011   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:41.156306   84547 cri.go:89] found id: ""
	I1210 00:02:41.156337   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.156355   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:41.156362   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:41.156423   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:41.189840   84547 cri.go:89] found id: ""
	I1210 00:02:41.189871   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.189882   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:41.189890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:41.189946   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:41.223019   84547 cri.go:89] found id: ""
	I1210 00:02:41.223051   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.223061   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:41.223072   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:41.223088   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:41.275608   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:41.275640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:41.289181   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:41.289210   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:41.358375   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:41.358404   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:41.358420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:41.440214   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:41.440250   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:43.980600   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:43.993110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:43.993165   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:44.026688   84547 cri.go:89] found id: ""
	I1210 00:02:44.026721   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.026732   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:44.026741   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:44.026796   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:44.062914   84547 cri.go:89] found id: ""
	I1210 00:02:44.062936   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.062943   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:44.062948   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:44.062999   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:44.105974   84547 cri.go:89] found id: ""
	I1210 00:02:44.106001   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.106009   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:44.106014   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:44.106061   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:44.140239   84547 cri.go:89] found id: ""
	I1210 00:02:44.140265   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.140274   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:44.140280   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:44.140338   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:44.175754   84547 cri.go:89] found id: ""
	I1210 00:02:44.175785   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.175796   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:44.175803   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:44.175870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:44.211663   84547 cri.go:89] found id: ""
	I1210 00:02:44.211694   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.211705   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:44.211712   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:44.211776   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:44.244796   84547 cri.go:89] found id: ""
	I1210 00:02:44.244821   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.244831   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:44.244837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:44.244898   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:44.282491   84547 cri.go:89] found id: ""
	I1210 00:02:44.282515   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.282528   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:44.282549   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:44.282562   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:44.335284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:44.335328   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:44.349489   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:44.349530   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:44.418643   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:44.418668   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:44.418682   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:44.493901   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:44.493932   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:43.197494   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:45.198341   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:43.256947   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:45.257225   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:47.258073   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:44.491413   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:46.990521   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:48.991847   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:47.033132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:47.046322   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:47.046403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:47.078490   84547 cri.go:89] found id: ""
	I1210 00:02:47.078521   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.078533   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:47.078541   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:47.078602   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:47.111457   84547 cri.go:89] found id: ""
	I1210 00:02:47.111479   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.111487   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:47.111492   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:47.111538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:47.144643   84547 cri.go:89] found id: ""
	I1210 00:02:47.144678   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.144689   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:47.144696   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:47.144757   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:47.178106   84547 cri.go:89] found id: ""
	I1210 00:02:47.178131   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.178141   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:47.178148   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:47.178213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:47.215670   84547 cri.go:89] found id: ""
	I1210 00:02:47.215697   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.215712   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:47.215718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:47.215767   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:47.248916   84547 cri.go:89] found id: ""
	I1210 00:02:47.248941   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.248948   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:47.248953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:47.249002   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:47.287632   84547 cri.go:89] found id: ""
	I1210 00:02:47.287660   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.287671   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:47.287680   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:47.287745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:47.327064   84547 cri.go:89] found id: ""
	I1210 00:02:47.327094   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.327103   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:47.327112   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:47.327126   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:47.341132   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:47.341176   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:47.417100   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:47.417121   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:47.417134   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:47.502612   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:47.502648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:47.541312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:47.541339   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.095403   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:50.108145   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:50.108202   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:50.140424   84547 cri.go:89] found id: ""
	I1210 00:02:50.140451   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.140462   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:50.140472   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:50.140532   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:50.173836   84547 cri.go:89] found id: ""
	I1210 00:02:50.173859   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.173866   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:50.173872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:50.173928   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:50.213916   84547 cri.go:89] found id: ""
	I1210 00:02:50.213937   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.213944   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:50.213949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:50.213997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:50.246854   84547 cri.go:89] found id: ""
	I1210 00:02:50.246889   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.246899   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:50.246907   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:50.246956   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:50.281416   84547 cri.go:89] found id: ""
	I1210 00:02:50.281448   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.281456   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:50.281462   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:50.281511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:50.313263   84547 cri.go:89] found id: ""
	I1210 00:02:50.313296   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.313308   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:50.313318   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:50.313385   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:50.347421   84547 cri.go:89] found id: ""
	I1210 00:02:50.347453   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.347463   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:50.347470   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:50.347544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:50.383111   84547 cri.go:89] found id: ""
	I1210 00:02:50.383134   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.383142   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:50.383151   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:50.383162   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:50.421982   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:50.422013   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.475478   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:50.475520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:50.489202   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:50.489256   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:02:47.199043   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:49.698055   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.698813   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:49.756585   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.757407   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.489831   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:53.490572   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	W1210 00:02:50.559501   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:50.559539   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:50.559552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.136042   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:53.149149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:53.149227   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:53.189323   84547 cri.go:89] found id: ""
	I1210 00:02:53.189349   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.189357   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:53.189365   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:53.189425   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:53.225235   84547 cri.go:89] found id: ""
	I1210 00:02:53.225269   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.225281   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:53.225288   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:53.225347   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:53.263465   84547 cri.go:89] found id: ""
	I1210 00:02:53.263492   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.263502   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:53.263510   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:53.263597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:53.297544   84547 cri.go:89] found id: ""
	I1210 00:02:53.297571   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.297583   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:53.297591   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:53.297656   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:53.331731   84547 cri.go:89] found id: ""
	I1210 00:02:53.331755   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.331762   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:53.331767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:53.331815   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:53.367395   84547 cri.go:89] found id: ""
	I1210 00:02:53.367427   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.367440   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:53.367447   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:53.367511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:53.403297   84547 cri.go:89] found id: ""
	I1210 00:02:53.403324   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.403332   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:53.403338   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:53.403398   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:53.437126   84547 cri.go:89] found id: ""
	I1210 00:02:53.437150   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.437158   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:53.437166   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:53.437177   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:53.489875   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:53.489914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:53.503915   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:53.503940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:53.578086   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:53.578114   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:53.578127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.658463   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:53.658501   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:53.699332   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.197941   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:54.257053   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.258352   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:55.990548   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:58.489947   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.197093   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:56.211959   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:56.212020   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:56.250462   84547 cri.go:89] found id: ""
	I1210 00:02:56.250484   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.250493   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:56.250498   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:56.250552   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:56.286828   84547 cri.go:89] found id: ""
	I1210 00:02:56.286854   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.286865   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:56.286872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:56.286939   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:56.320750   84547 cri.go:89] found id: ""
	I1210 00:02:56.320779   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.320787   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:56.320793   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:56.320840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:56.355903   84547 cri.go:89] found id: ""
	I1210 00:02:56.355943   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.355954   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:56.355960   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:56.356026   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:56.395979   84547 cri.go:89] found id: ""
	I1210 00:02:56.396007   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.396018   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:56.396025   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:56.396081   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:56.431481   84547 cri.go:89] found id: ""
	I1210 00:02:56.431506   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.431514   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:56.431520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:56.431594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:56.470318   84547 cri.go:89] found id: ""
	I1210 00:02:56.470349   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.470356   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:56.470361   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:56.470410   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:56.506388   84547 cri.go:89] found id: ""
	I1210 00:02:56.506411   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.506418   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:56.506429   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:56.506441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:56.558438   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:56.558483   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:56.571981   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:56.572004   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:56.634665   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:56.634697   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:56.634713   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:56.715663   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:56.715697   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:59.255305   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:59.268846   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:59.268930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:59.302334   84547 cri.go:89] found id: ""
	I1210 00:02:59.302359   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.302366   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:59.302372   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:59.302426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:59.336380   84547 cri.go:89] found id: ""
	I1210 00:02:59.336409   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.336419   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:59.336425   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:59.336492   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:59.370179   84547 cri.go:89] found id: ""
	I1210 00:02:59.370201   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.370210   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:59.370214   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:59.370268   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:59.403199   84547 cri.go:89] found id: ""
	I1210 00:02:59.403222   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.403229   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:59.403236   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:59.403307   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:59.438650   84547 cri.go:89] found id: ""
	I1210 00:02:59.438673   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.438681   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:59.438686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:59.438736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:59.473166   84547 cri.go:89] found id: ""
	I1210 00:02:59.473191   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.473199   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:59.473205   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:59.473264   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:59.506857   84547 cri.go:89] found id: ""
	I1210 00:02:59.506879   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.506888   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:59.506902   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:59.506963   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:59.551488   84547 cri.go:89] found id: ""
	I1210 00:02:59.551515   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.551526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:59.551542   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:59.551557   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:59.605032   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:59.605069   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:59.619238   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:59.619271   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:59.690772   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:59.690798   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:59.690813   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:59.774424   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:59.774460   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:58.198128   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:00.698106   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:58.756596   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:01.257970   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:00.490268   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:02.990951   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:02.315240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:02.329636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:02.329728   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:02.368571   84547 cri.go:89] found id: ""
	I1210 00:03:02.368599   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.368609   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:02.368621   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:02.368687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:02.406107   84547 cri.go:89] found id: ""
	I1210 00:03:02.406136   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.406148   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:02.406155   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:02.406219   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:02.443050   84547 cri.go:89] found id: ""
	I1210 00:03:02.443077   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.443091   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:02.443098   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:02.443146   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:02.484425   84547 cri.go:89] found id: ""
	I1210 00:03:02.484451   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.484461   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:02.484469   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:02.484536   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:02.525624   84547 cri.go:89] found id: ""
	I1210 00:03:02.525647   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.525655   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:02.525661   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:02.525711   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:02.564808   84547 cri.go:89] found id: ""
	I1210 00:03:02.564839   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.564850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:02.564856   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:02.564907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:02.598322   84547 cri.go:89] found id: ""
	I1210 00:03:02.598346   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.598354   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:02.598359   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:02.598417   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:02.632256   84547 cri.go:89] found id: ""
	I1210 00:03:02.632310   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.632322   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:02.632334   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:02.632348   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:02.686025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:02.686063   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:02.701214   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:02.701237   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:02.773453   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:02.773477   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:02.773491   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:02.862017   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:02.862068   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:05.401014   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:05.415009   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:05.415072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:05.454916   84547 cri.go:89] found id: ""
	I1210 00:03:05.454945   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.454955   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:05.454962   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:05.455023   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:05.492955   84547 cri.go:89] found id: ""
	I1210 00:03:05.492981   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.492988   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:05.492995   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:05.493046   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:02.698652   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.196840   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:03.755858   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.757395   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.489875   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:07.990053   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.539646   84547 cri.go:89] found id: ""
	I1210 00:03:05.539670   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.539683   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:05.539690   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:05.539755   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:05.574534   84547 cri.go:89] found id: ""
	I1210 00:03:05.574559   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.574567   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:05.574572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:05.574632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:05.608141   84547 cri.go:89] found id: ""
	I1210 00:03:05.608166   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.608174   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:05.608180   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:05.608243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:05.643725   84547 cri.go:89] found id: ""
	I1210 00:03:05.643751   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.643759   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:05.643765   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:05.643812   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:05.677730   84547 cri.go:89] found id: ""
	I1210 00:03:05.677760   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.677772   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:05.677779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:05.677846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:05.712475   84547 cri.go:89] found id: ""
	I1210 00:03:05.712507   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.712519   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:05.712531   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:05.712548   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:05.764532   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:05.764569   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:05.777910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:05.777938   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:05.851070   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:05.851099   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:05.851114   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:05.929518   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:05.929554   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.465166   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:08.478666   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:08.478723   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:08.513763   84547 cri.go:89] found id: ""
	I1210 00:03:08.513795   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.513808   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:08.513816   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:08.513877   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:08.548272   84547 cri.go:89] found id: ""
	I1210 00:03:08.548299   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.548306   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:08.548313   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:08.548397   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:08.581773   84547 cri.go:89] found id: ""
	I1210 00:03:08.581801   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.581812   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:08.581820   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:08.581884   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:08.613625   84547 cri.go:89] found id: ""
	I1210 00:03:08.613655   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.613664   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:08.613669   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:08.613716   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:08.644618   84547 cri.go:89] found id: ""
	I1210 00:03:08.644644   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.644652   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:08.644658   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:08.644717   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:08.676840   84547 cri.go:89] found id: ""
	I1210 00:03:08.676867   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.676879   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:08.676887   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:08.676952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:08.709918   84547 cri.go:89] found id: ""
	I1210 00:03:08.709952   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.709961   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:08.709969   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:08.710029   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:08.740842   84547 cri.go:89] found id: ""
	I1210 00:03:08.740871   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.740882   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:08.740893   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:08.740907   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:08.808679   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:08.808705   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:08.808721   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:08.884692   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:08.884733   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.920922   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:08.920952   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:08.971404   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:08.971441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
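	(The cycle above is minikube's log-gathering fallback while no control-plane containers are running: each pass pulls the kubelet and CRI-O journals, recent kernel warnings, a "kubectl describe nodes" attempt that fails because the apiserver on localhost:8443 is down, and an overall container listing. Below is a minimal sketch for re-running the same checks by hand on the node; every command is taken from the log, and the paths such as /var/lib/minikube/binaries/v1.20.0/kubectl assume the same on-node layout shown there.)

	  # kubelet and CRI-O service logs (last 400 lines each)
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400

	  # recent kernel warnings and errors
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	  # node description via the bundled kubectl; expected to fail until the apiserver is back
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig

	  # overall container status, falling back to docker if crictl is absent
	  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a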
	I1210 00:03:07.197548   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:09.197749   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:11.198894   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:08.258477   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:10.755482   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:10.490495   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:12.990709   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:11.484905   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:11.498791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:11.498865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:11.535784   84547 cri.go:89] found id: ""
	I1210 00:03:11.535809   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.535820   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:11.535827   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:11.535890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:11.567981   84547 cri.go:89] found id: ""
	I1210 00:03:11.568006   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.568019   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:11.568026   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:11.568083   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:11.601188   84547 cri.go:89] found id: ""
	I1210 00:03:11.601213   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.601224   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:11.601231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:11.601289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:11.635759   84547 cri.go:89] found id: ""
	I1210 00:03:11.635787   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.635798   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:11.635807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:11.635867   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:11.675512   84547 cri.go:89] found id: ""
	I1210 00:03:11.675537   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.675547   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:11.675554   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:11.675638   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:11.709889   84547 cri.go:89] found id: ""
	I1210 00:03:11.709912   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.709922   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:11.709929   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:11.709987   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:11.744571   84547 cri.go:89] found id: ""
	I1210 00:03:11.744603   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.744610   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:11.744616   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:11.744677   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:11.779091   84547 cri.go:89] found id: ""
	I1210 00:03:11.779123   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.779136   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:11.779146   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:11.779156   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:11.830682   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:11.830726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:11.844352   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:11.844383   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:11.915025   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:11.915046   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:11.915058   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:11.998581   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:11.998620   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:14.536408   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:14.550250   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:14.550321   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:14.586005   84547 cri.go:89] found id: ""
	I1210 00:03:14.586037   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.586049   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:14.586057   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:14.586118   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:14.620124   84547 cri.go:89] found id: ""
	I1210 00:03:14.620159   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.620170   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:14.620178   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:14.620231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:14.655079   84547 cri.go:89] found id: ""
	I1210 00:03:14.655104   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.655112   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:14.655117   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:14.655178   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:14.688253   84547 cri.go:89] found id: ""
	I1210 00:03:14.688285   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.688298   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:14.688308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:14.688371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:14.726904   84547 cri.go:89] found id: ""
	I1210 00:03:14.726932   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.726940   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:14.726945   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:14.726994   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:14.764836   84547 cri.go:89] found id: ""
	I1210 00:03:14.764868   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.764881   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:14.764890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:14.764955   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:14.803557   84547 cri.go:89] found id: ""
	I1210 00:03:14.803605   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.803616   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:14.803621   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:14.803674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:14.841070   84547 cri.go:89] found id: ""
	I1210 00:03:14.841102   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.841122   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:14.841137   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:14.841161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:14.907607   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:14.907631   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:14.907644   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:14.985179   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:14.985209   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:15.022654   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:15.022687   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:15.075224   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:15.075260   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:13.697072   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:15.697892   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:12.757232   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:14.758529   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.256541   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:15.490871   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.990140   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.589836   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:17.604045   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:17.604102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:17.643211   84547 cri.go:89] found id: ""
	I1210 00:03:17.643241   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.643251   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:17.643260   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:17.643320   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:17.675436   84547 cri.go:89] found id: ""
	I1210 00:03:17.675462   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.675472   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:17.675479   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:17.675538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:17.709428   84547 cri.go:89] found id: ""
	I1210 00:03:17.709465   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.709476   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:17.709484   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:17.709544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:17.764088   84547 cri.go:89] found id: ""
	I1210 00:03:17.764121   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.764132   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:17.764139   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:17.764200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:17.797908   84547 cri.go:89] found id: ""
	I1210 00:03:17.797933   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.797944   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:17.797953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:17.798016   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:17.829580   84547 cri.go:89] found id: ""
	I1210 00:03:17.829609   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.829620   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:17.829628   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:17.829687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:17.865106   84547 cri.go:89] found id: ""
	I1210 00:03:17.865136   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.865145   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:17.865150   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:17.865200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:17.896757   84547 cri.go:89] found id: ""
	I1210 00:03:17.896791   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.896803   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:17.896814   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:17.896830   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:17.977210   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:17.977244   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:18.016843   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:18.016867   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:18.065263   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:18.065294   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:18.078188   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:18.078212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:18.148164   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:17.698675   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:20.199732   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:19.755949   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:21.756959   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:19.990714   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:21.990766   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:23.991443   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:23.991468   83900 pod_ready.go:82] duration metric: took 4m0.007840011s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	E1210 00:03:23.991477   83900 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 00:03:23.991484   83900 pod_ready.go:39] duration metric: took 4m6.196076299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
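	(The pod_ready entries above show minikube polling the metrics-server pod roughly every two to three seconds until the 4-minute budget expires at 00:03:23 with "context deadline exceeded". A roughly equivalent manual check with kubectl is sketched below; the kube context name is a placeholder, and only the pod name shown in the log, metrics-server-6867b74b74-hg7c5 in kube-system, is taken from the source.)

	  # one-shot readiness check for the pod the test is waiting on
	  kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-hg7c5 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

	  # blocking wait with the same 4-minute budget the test uses
	  kubectl --context <profile> -n kube-system wait pod/metrics-server-6867b74b74-hg7c5 \
	    --for=condition=Ready --timeout=4m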
	I1210 00:03:23.991501   83900 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:03:23.991534   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:23.991610   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:24.032198   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:24.032227   83900 cri.go:89] found id: ""
	I1210 00:03:24.032237   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:24.032303   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.037671   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:24.037746   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:24.076467   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:24.076499   83900 cri.go:89] found id: ""
	I1210 00:03:24.076507   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:24.076557   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.081125   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:24.081193   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:24.125433   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:24.125472   83900 cri.go:89] found id: ""
	I1210 00:03:24.125483   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:24.125542   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.131023   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:24.131097   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:24.173215   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:24.173239   83900 cri.go:89] found id: ""
	I1210 00:03:24.173247   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:24.173303   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.179027   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:24.179108   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:24.228905   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:24.228936   83900 cri.go:89] found id: ""
	I1210 00:03:24.228946   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:24.229004   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.234441   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:24.234520   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:20.648921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:20.661458   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:20.661516   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:20.693473   84547 cri.go:89] found id: ""
	I1210 00:03:20.693509   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.693518   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:20.693524   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:20.693576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:20.726279   84547 cri.go:89] found id: ""
	I1210 00:03:20.726301   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.726309   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:20.726314   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:20.726375   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:20.759895   84547 cri.go:89] found id: ""
	I1210 00:03:20.759922   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.759931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:20.759936   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:20.759988   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:20.793934   84547 cri.go:89] found id: ""
	I1210 00:03:20.793964   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.793974   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:20.793982   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:20.794049   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:20.828040   84547 cri.go:89] found id: ""
	I1210 00:03:20.828066   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.828077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:20.828084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:20.828143   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:20.861917   84547 cri.go:89] found id: ""
	I1210 00:03:20.861950   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.861960   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:20.861967   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:20.862028   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:20.896458   84547 cri.go:89] found id: ""
	I1210 00:03:20.896481   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.896489   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:20.896494   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:20.896551   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:20.931014   84547 cri.go:89] found id: ""
	I1210 00:03:20.931044   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.931052   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:20.931061   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:20.931072   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:20.968693   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:20.968718   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:21.021880   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:21.021917   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:21.035848   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:21.035881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:21.104570   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:21.104604   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:21.104621   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:23.679447   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:23.692326   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:23.692426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:23.729773   84547 cri.go:89] found id: ""
	I1210 00:03:23.729796   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.729804   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:23.729809   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:23.729855   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:23.765875   84547 cri.go:89] found id: ""
	I1210 00:03:23.765905   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.765915   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:23.765922   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:23.765984   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:23.800785   84547 cri.go:89] found id: ""
	I1210 00:03:23.800821   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.800831   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:23.800838   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:23.800902   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:23.838137   84547 cri.go:89] found id: ""
	I1210 00:03:23.838160   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.838168   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:23.838173   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:23.838222   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:23.869916   84547 cri.go:89] found id: ""
	I1210 00:03:23.869947   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.869958   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:23.869966   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:23.870027   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:23.903939   84547 cri.go:89] found id: ""
	I1210 00:03:23.903962   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.903969   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:23.903975   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:23.904021   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:23.937092   84547 cri.go:89] found id: ""
	I1210 00:03:23.937119   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.937127   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:23.937133   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:23.937194   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:23.970562   84547 cri.go:89] found id: ""
	I1210 00:03:23.970599   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.970611   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:23.970622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:23.970641   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:23.983364   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:23.983394   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:24.066624   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:24.066645   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:24.066657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:24.151466   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:24.151502   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:24.204590   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:24.204615   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
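	(Each polling pass for the cluster that never comes up, process 84547, runs the same per-component crictl query and finds nothing. The loop below reproduces that check on the node; the crictl invocation and the component list are exactly the names queried in the log.)

	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	           kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl ps -a --quiet --name="$c")
	    if [ -z "$ids" ]; then
	      echo "no container matching \"$c\""
	    else
	      echo "$c: $ids"
	    fi
	  done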
	I1210 00:03:22.698354   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.698462   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.256922   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:26.755965   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.279840   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:24.279859   83900 cri.go:89] found id: ""
	I1210 00:03:24.279866   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:24.279922   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.283898   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:24.283972   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:24.319541   83900 cri.go:89] found id: ""
	I1210 00:03:24.319598   83900 logs.go:282] 0 containers: []
	W1210 00:03:24.319612   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:24.319624   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:24.319686   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:24.356205   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:24.356228   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:24.356234   83900 cri.go:89] found id: ""
	I1210 00:03:24.356242   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:24.356302   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.361772   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.366656   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:24.366690   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:24.408922   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:24.408955   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:24.466720   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:24.466762   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:24.500254   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:24.500290   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:24.535025   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:24.535058   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:24.575706   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:24.575732   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:24.645227   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:24.645265   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:24.658833   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:24.658878   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:24.702076   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:24.702107   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:24.738869   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:24.738900   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:25.204715   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:25.204753   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:25.320267   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:25.320315   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:25.370599   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:25.370635   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:27.910863   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:27.925070   83900 api_server.go:72] duration metric: took 4m17.856306845s to wait for apiserver process to appear ...
	I1210 00:03:27.925098   83900 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:03:27.925164   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:27.925227   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:27.959991   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:27.960017   83900 cri.go:89] found id: ""
	I1210 00:03:27.960026   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:27.960081   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:27.964069   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:27.964128   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:27.997618   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:27.997640   83900 cri.go:89] found id: ""
	I1210 00:03:27.997650   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:27.997710   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.001497   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:28.001570   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:28.034858   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:28.034883   83900 cri.go:89] found id: ""
	I1210 00:03:28.034891   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:28.034953   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.038775   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:28.038852   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:28.074473   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:28.074497   83900 cri.go:89] found id: ""
	I1210 00:03:28.074506   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:28.074568   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.079082   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:28.079149   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:28.113948   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:28.113971   83900 cri.go:89] found id: ""
	I1210 00:03:28.113981   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:28.114045   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.118930   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:28.118982   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:28.162794   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:28.162820   83900 cri.go:89] found id: ""
	I1210 00:03:28.162827   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:28.162872   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.166637   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:28.166715   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:28.206591   83900 cri.go:89] found id: ""
	I1210 00:03:28.206619   83900 logs.go:282] 0 containers: []
	W1210 00:03:28.206627   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:28.206633   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:28.206693   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:28.251722   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:28.251748   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:28.251754   83900 cri.go:89] found id: ""
	I1210 00:03:28.251763   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:28.251821   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.256785   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.260355   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:28.260378   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:28.295801   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:28.295827   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:28.734920   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:28.734963   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:28.804266   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:28.804305   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:28.818223   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:28.818251   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:28.931008   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:28.931044   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:28.977544   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:28.977592   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:29.016645   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:29.016679   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:29.052920   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:29.052945   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:29.094464   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:29.094497   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:29.132520   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:29.132545   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:29.180364   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:29.180396   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:29.216627   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:29.216657   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:26.774169   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:26.786888   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:26.786964   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:26.820391   84547 cri.go:89] found id: ""
	I1210 00:03:26.820423   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.820433   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:26.820441   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:26.820504   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:26.853927   84547 cri.go:89] found id: ""
	I1210 00:03:26.853955   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.853963   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:26.853971   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:26.854031   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:26.887309   84547 cri.go:89] found id: ""
	I1210 00:03:26.887337   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.887347   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:26.887353   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:26.887415   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:26.923869   84547 cri.go:89] found id: ""
	I1210 00:03:26.923897   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.923908   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:26.923915   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:26.923983   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:26.959919   84547 cri.go:89] found id: ""
	I1210 00:03:26.959952   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.959964   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:26.959971   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:26.960032   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:26.993742   84547 cri.go:89] found id: ""
	I1210 00:03:26.993775   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.993787   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:26.993794   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:26.993853   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:27.026977   84547 cri.go:89] found id: ""
	I1210 00:03:27.027011   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.027021   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:27.027028   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:27.027090   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:27.064135   84547 cri.go:89] found id: ""
	I1210 00:03:27.064164   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.064173   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:27.064181   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:27.064192   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:27.101758   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:27.101784   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:27.155536   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:27.155592   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:27.168891   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:27.168914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:27.242486   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:27.242511   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:27.242525   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:29.821740   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:29.834536   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:29.834604   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:29.868316   84547 cri.go:89] found id: ""
	I1210 00:03:29.868341   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.868348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:29.868354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:29.868402   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:29.902290   84547 cri.go:89] found id: ""
	I1210 00:03:29.902320   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.902331   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:29.902338   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:29.902399   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:29.948750   84547 cri.go:89] found id: ""
	I1210 00:03:29.948780   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.948792   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:29.948800   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:29.948864   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:29.994587   84547 cri.go:89] found id: ""
	I1210 00:03:29.994618   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.994629   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:29.994636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:29.994694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:30.042216   84547 cri.go:89] found id: ""
	I1210 00:03:30.042243   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.042256   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:30.042264   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:30.042345   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:30.075964   84547 cri.go:89] found id: ""
	I1210 00:03:30.075989   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.075999   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:30.076007   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:30.076072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:30.108647   84547 cri.go:89] found id: ""
	I1210 00:03:30.108676   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.108687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:30.108695   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:30.108760   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:30.140983   84547 cri.go:89] found id: ""
	I1210 00:03:30.141013   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.141022   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:30.141030   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:30.141040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:30.198281   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:30.198319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:30.213820   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:30.213849   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:30.283907   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:30.283927   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:30.283940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:30.360731   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:30.360768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:27.197512   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:29.698238   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.698679   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:28.757444   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.255481   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.759061   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1210 00:03:31.763064   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 200:
	ok
	I1210 00:03:31.763974   83900 api_server.go:141] control plane version: v1.31.2
	I1210 00:03:31.763992   83900 api_server.go:131] duration metric: took 3.83888731s to wait for apiserver health ...
	I1210 00:03:31.763999   83900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:03:31.764021   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:31.764077   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:31.798282   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:31.798312   83900 cri.go:89] found id: ""
	I1210 00:03:31.798320   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:31.798375   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.802661   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:31.802726   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:31.837861   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:31.837888   83900 cri.go:89] found id: ""
	I1210 00:03:31.837896   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:31.837943   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.842139   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:31.842224   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:31.876940   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:31.876967   83900 cri.go:89] found id: ""
	I1210 00:03:31.876977   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:31.877043   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.881416   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:31.881491   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:31.915449   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:31.915481   83900 cri.go:89] found id: ""
	I1210 00:03:31.915491   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:31.915575   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.919342   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:31.919400   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:31.954554   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:31.954583   83900 cri.go:89] found id: ""
	I1210 00:03:31.954592   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:31.954641   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.958484   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:31.958569   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:32.000361   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:32.000390   83900 cri.go:89] found id: ""
	I1210 00:03:32.000401   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:32.000462   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.004339   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:32.004400   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:32.043231   83900 cri.go:89] found id: ""
	I1210 00:03:32.043254   83900 logs.go:282] 0 containers: []
	W1210 00:03:32.043279   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:32.043285   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:32.043334   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:32.079572   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:32.079597   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:32.079608   83900 cri.go:89] found id: ""
	I1210 00:03:32.079615   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:32.079661   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.083526   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.087101   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:32.087126   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:32.494977   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:32.495021   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:32.509299   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:32.509342   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:32.612029   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:32.612066   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:32.656868   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:32.656900   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:32.695347   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:32.695376   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:32.732071   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:32.732100   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:32.767692   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:32.767718   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:32.836047   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:32.836088   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:32.887136   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:32.887175   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:32.929836   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:32.929873   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:32.972459   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:32.972492   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:33.029387   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:33.029415   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:32.904302   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:32.919123   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:32.919209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:32.952847   84547 cri.go:89] found id: ""
	I1210 00:03:32.952879   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.952889   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:32.952897   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:32.952961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:32.986993   84547 cri.go:89] found id: ""
	I1210 00:03:32.987020   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.987029   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:32.987035   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:32.987085   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:33.027506   84547 cri.go:89] found id: ""
	I1210 00:03:33.027536   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.027548   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:33.027556   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:33.027630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:33.061553   84547 cri.go:89] found id: ""
	I1210 00:03:33.061588   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.061605   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:33.061613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:33.061673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:33.097640   84547 cri.go:89] found id: ""
	I1210 00:03:33.097679   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.097693   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:33.097702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:33.097783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:33.131725   84547 cri.go:89] found id: ""
	I1210 00:03:33.131758   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.131768   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:33.131775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:33.131839   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:33.165731   84547 cri.go:89] found id: ""
	I1210 00:03:33.165759   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.165771   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:33.165779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:33.165841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:33.200288   84547 cri.go:89] found id: ""
	I1210 00:03:33.200310   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.200320   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:33.200338   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:33.200351   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:33.251524   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:33.251577   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:33.266197   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:33.266223   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:33.343529   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:33.343583   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:33.343600   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:33.420133   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:33.420175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:35.577482   83900 system_pods.go:59] 8 kube-system pods found
	I1210 00:03:35.577511   83900 system_pods.go:61] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running
	I1210 00:03:35.577516   83900 system_pods.go:61] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running
	I1210 00:03:35.577520   83900 system_pods.go:61] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running
	I1210 00:03:35.577524   83900 system_pods.go:61] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running
	I1210 00:03:35.577527   83900 system_pods.go:61] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running
	I1210 00:03:35.577529   83900 system_pods.go:61] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running
	I1210 00:03:35.577537   83900 system_pods.go:61] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:03:35.577542   83900 system_pods.go:61] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running
	I1210 00:03:35.577549   83900 system_pods.go:74] duration metric: took 3.81354476s to wait for pod list to return data ...
	I1210 00:03:35.577556   83900 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:03:35.579951   83900 default_sa.go:45] found service account: "default"
	I1210 00:03:35.579971   83900 default_sa.go:55] duration metric: took 2.410307ms for default service account to be created ...
	I1210 00:03:35.579977   83900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:03:35.584102   83900 system_pods.go:86] 8 kube-system pods found
	I1210 00:03:35.584126   83900 system_pods.go:89] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running
	I1210 00:03:35.584131   83900 system_pods.go:89] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running
	I1210 00:03:35.584136   83900 system_pods.go:89] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running
	I1210 00:03:35.584142   83900 system_pods.go:89] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running
	I1210 00:03:35.584148   83900 system_pods.go:89] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running
	I1210 00:03:35.584153   83900 system_pods.go:89] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running
	I1210 00:03:35.584163   83900 system_pods.go:89] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:03:35.584168   83900 system_pods.go:89] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running
	I1210 00:03:35.584181   83900 system_pods.go:126] duration metric: took 4.196356ms to wait for k8s-apps to be running ...
	I1210 00:03:35.584192   83900 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:03:35.584235   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:03:35.598192   83900 system_svc.go:56] duration metric: took 13.993491ms WaitForService to wait for kubelet
	I1210 00:03:35.598218   83900 kubeadm.go:582] duration metric: took 4m25.529459505s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:03:35.598238   83900 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:03:35.600992   83900 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:03:35.601012   83900 node_conditions.go:123] node cpu capacity is 2
	I1210 00:03:35.601025   83900 node_conditions.go:105] duration metric: took 2.782415ms to run NodePressure ...
	I1210 00:03:35.601038   83900 start.go:241] waiting for startup goroutines ...
	I1210 00:03:35.601049   83900 start.go:246] waiting for cluster config update ...
	I1210 00:03:35.601063   83900 start.go:255] writing updated cluster config ...
	I1210 00:03:35.601360   83900 ssh_runner.go:195] Run: rm -f paused
	I1210 00:03:35.647487   83900 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:03:35.650508   83900 out.go:177] * Done! kubectl is now configured to use "embed-certs-825613" cluster and "default" namespace by default
	I1210 00:03:34.199682   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:36.696731   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:33.258255   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:35.756802   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:35.971411   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:35.988993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:35.989059   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:36.022639   84547 cri.go:89] found id: ""
	I1210 00:03:36.022673   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.022684   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:36.022692   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:36.022758   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:36.055421   84547 cri.go:89] found id: ""
	I1210 00:03:36.055452   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.055461   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:36.055466   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:36.055514   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:36.087690   84547 cri.go:89] found id: ""
	I1210 00:03:36.087721   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.087731   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:36.087738   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:36.087802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:36.125209   84547 cri.go:89] found id: ""
	I1210 00:03:36.125240   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.125249   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:36.125254   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:36.125304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:36.157544   84547 cri.go:89] found id: ""
	I1210 00:03:36.157586   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.157599   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:36.157607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:36.157676   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:36.193415   84547 cri.go:89] found id: ""
	I1210 00:03:36.193448   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.193459   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:36.193466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:36.193525   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:36.228941   84547 cri.go:89] found id: ""
	I1210 00:03:36.228971   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.228982   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:36.228989   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:36.229052   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:36.264821   84547 cri.go:89] found id: ""
	I1210 00:03:36.264851   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.264862   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:36.264873   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:36.264887   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:36.314841   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:36.314881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:36.328664   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:36.328695   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:36.402929   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:36.402956   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:36.402970   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:36.480270   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:36.480306   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:39.022748   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:39.036323   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:39.036382   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:39.070582   84547 cri.go:89] found id: ""
	I1210 00:03:39.070608   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.070616   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:39.070622   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:39.070690   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:39.108783   84547 cri.go:89] found id: ""
	I1210 00:03:39.108814   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.108825   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:39.108832   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:39.108889   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:39.142300   84547 cri.go:89] found id: ""
	I1210 00:03:39.142330   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.142338   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:39.142344   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:39.142400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:39.177859   84547 cri.go:89] found id: ""
	I1210 00:03:39.177891   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.177903   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:39.177911   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:39.177965   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:39.212109   84547 cri.go:89] found id: ""
	I1210 00:03:39.212140   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.212152   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:39.212160   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:39.212209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:39.245451   84547 cri.go:89] found id: ""
	I1210 00:03:39.245490   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.245501   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:39.245509   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:39.245569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:39.278925   84547 cri.go:89] found id: ""
	I1210 00:03:39.278957   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.278967   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:39.278974   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:39.279063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:39.312322   84547 cri.go:89] found id: ""
	I1210 00:03:39.312350   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.312358   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:39.312367   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:39.312377   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:39.362844   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:39.362882   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:39.375938   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:39.375967   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:39.443649   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:39.443676   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:39.443691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:39.523722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:39.523757   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:38.698105   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:41.198814   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:38.256567   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:40.257434   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:42.062317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:42.075094   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:42.075157   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:42.106719   84547 cri.go:89] found id: ""
	I1210 00:03:42.106744   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.106751   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:42.106757   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:42.106805   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:42.141977   84547 cri.go:89] found id: ""
	I1210 00:03:42.142011   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.142022   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:42.142029   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:42.142089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:42.175117   84547 cri.go:89] found id: ""
	I1210 00:03:42.175151   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.175164   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:42.175172   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:42.175243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:42.209871   84547 cri.go:89] found id: ""
	I1210 00:03:42.209900   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.209911   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:42.209919   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:42.209982   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:42.252784   84547 cri.go:89] found id: ""
	I1210 00:03:42.252812   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.252822   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:42.252830   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:42.252891   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:42.285934   84547 cri.go:89] found id: ""
	I1210 00:03:42.285961   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.285973   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:42.285980   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:42.286040   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:42.327393   84547 cri.go:89] found id: ""
	I1210 00:03:42.327423   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.327433   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:42.327440   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:42.327498   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:42.362244   84547 cri.go:89] found id: ""
	I1210 00:03:42.362279   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.362289   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:42.362302   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:42.362317   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:42.441656   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:42.441691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:42.479357   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:42.479397   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:42.533002   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:42.533038   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:42.546485   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:42.546514   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:42.616912   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:45.117156   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:45.131265   84547 kubeadm.go:597] duration metric: took 4m2.157458218s to restartPrimaryControlPlane
	W1210 00:03:45.131350   84547 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:03:45.131379   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:03:43.699026   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:46.197420   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:42.756686   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:44.251077   84259 pod_ready.go:82] duration metric: took 4m0.000996899s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" ...
	E1210 00:03:44.251112   84259 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 00:03:44.251136   84259 pod_ready.go:39] duration metric: took 4m13.15350289s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:03:44.251170   84259 kubeadm.go:597] duration metric: took 4m23.227405415s to restartPrimaryControlPlane
	W1210 00:03:44.251238   84259 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:03:44.251368   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:03:46.778025   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.646620044s)
	I1210 00:03:46.778099   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:03:46.792384   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:03:46.802897   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:03:46.813393   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:03:46.813417   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:03:46.813473   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:03:46.823016   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:03:46.823093   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:03:46.832912   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:03:46.841961   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:03:46.842019   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:03:46.851160   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.859879   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:03:46.859938   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.869192   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:03:46.878423   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:03:46.878487   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:03:46.888103   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:03:47.105146   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:03:48.698211   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:51.197463   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:53.198578   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:55.697251   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:57.698289   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:00.198291   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:02.696926   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:04.697431   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:06.698260   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:10.270952   84259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.019545181s)
	I1210 00:04:10.271025   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:10.292112   84259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:04:10.307538   84259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:04:10.325375   84259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:04:10.325406   84259 kubeadm.go:157] found existing configuration files:
	
	I1210 00:04:10.325465   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 00:04:10.338892   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:04:10.338960   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:04:10.353888   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 00:04:10.364787   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:04:10.364852   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:04:10.374513   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 00:04:10.393430   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:04:10.393486   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:04:10.403621   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 00:04:10.412759   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:04:10.412824   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:04:10.422153   84259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:04:10.468789   84259 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:04:10.468852   84259 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:04:10.566712   84259 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:04:10.566840   84259 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:04:10.566939   84259 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:04:10.574253   84259 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:04:10.576376   84259 out.go:235]   - Generating certificates and keys ...
	I1210 00:04:10.576485   84259 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:04:10.576566   84259 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:04:10.576722   84259 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:04:10.576816   84259 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:04:10.576915   84259 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:04:10.576991   84259 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:04:10.577107   84259 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:04:10.577214   84259 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:04:10.577319   84259 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:04:10.577433   84259 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:04:10.577522   84259 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:04:10.577616   84259 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:04:10.870887   84259 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:04:10.977055   84259 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:04:11.160320   84259 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:04:11.207864   84259 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:04:11.315037   84259 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:04:11.315715   84259 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:04:11.319135   84259 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:04:09.197254   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:11.197831   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:11.321063   84259 out.go:235]   - Booting up control plane ...
	I1210 00:04:11.321193   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:04:11.321296   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:04:11.322134   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:04:11.341567   84259 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:04:11.347658   84259 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:04:11.347736   84259 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:04:11.492127   84259 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:04:11.492293   84259 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:04:11.994410   84259 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.01471ms
	I1210 00:04:11.994533   84259 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:04:13.697731   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:16.198771   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:16.998638   84259 kubeadm.go:310] [api-check] The API server is healthy after 5.002934845s
	I1210 00:04:17.009930   84259 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:04:17.025483   84259 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:04:17.059445   84259 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:04:17.059762   84259 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-871210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:04:17.071665   84259 kubeadm.go:310] [bootstrap-token] Using token: 1x1r34.gs3p33sqgju9dylj
	I1210 00:04:17.073936   84259 out.go:235]   - Configuring RBAC rules ...
	I1210 00:04:17.074055   84259 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:04:17.079920   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:04:17.090408   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:04:17.094072   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:04:17.096951   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:04:17.099929   84259 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:04:17.404431   84259 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:04:17.837125   84259 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:04:18.404721   84259 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:04:18.405661   84259 kubeadm.go:310] 
	I1210 00:04:18.405757   84259 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:04:18.405770   84259 kubeadm.go:310] 
	I1210 00:04:18.405871   84259 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:04:18.405883   84259 kubeadm.go:310] 
	I1210 00:04:18.405916   84259 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:04:18.406012   84259 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:04:18.406101   84259 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:04:18.406111   84259 kubeadm.go:310] 
	I1210 00:04:18.406197   84259 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:04:18.406209   84259 kubeadm.go:310] 
	I1210 00:04:18.406318   84259 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:04:18.406329   84259 kubeadm.go:310] 
	I1210 00:04:18.406412   84259 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:04:18.406482   84259 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:04:18.406544   84259 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:04:18.406550   84259 kubeadm.go:310] 
	I1210 00:04:18.406643   84259 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:04:18.406787   84259 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:04:18.406802   84259 kubeadm.go:310] 
	I1210 00:04:18.406919   84259 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 1x1r34.gs3p33sqgju9dylj \
	I1210 00:04:18.407089   84259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1210 00:04:18.407117   84259 kubeadm.go:310] 	--control-plane 
	I1210 00:04:18.407123   84259 kubeadm.go:310] 
	I1210 00:04:18.407207   84259 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:04:18.407214   84259 kubeadm.go:310] 
	I1210 00:04:18.407300   84259 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 1x1r34.gs3p33sqgju9dylj \
	I1210 00:04:18.407433   84259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1210 00:04:18.408197   84259 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
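	The block above is the complete kubeadm re-initialization of the default-k8s-diff-port-871210 control plane: minikube ran `kubeadm reset`, confirmed the stale /etc/kubernetes/*.conf files were already gone, copied the freshly generated kubeadm.yaml into place, and re-ran `kubeadm init`. A condensed manual equivalent of that sequence, with every path and flag copied from the log lines (a reading aid, not the test's own code):

	    # Condensed manual equivalent of the reset-and-reinit sequence logged above.
	    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo rm -f /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf
	    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem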
	I1210 00:04:18.408272   84259 cni.go:84] Creating CNI manager for ""
	I1210 00:04:18.408289   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:04:18.410841   84259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:04:18.697285   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:20.698266   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:18.412209   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:04:18.422123   84259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
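	The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not printed in the log. For orientation only, a minimal bridge conflist in the standard CNI schema looks roughly like the sketch below; the subnet, the portmap entry and the individual field values are illustrative assumptions, not necessarily the exact file minikube writes:

	    # Illustrative bridge CNI config only -- not the literal 496-byte file from the log.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF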
	I1210 00:04:18.440972   84259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:04:18.441058   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:18.441138   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-871210 minikube.k8s.io/updated_at=2024_12_10T00_04_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=default-k8s-diff-port-871210 minikube.k8s.io/primary=true
	I1210 00:04:18.462185   84259 ops.go:34] apiserver oom_adj: -16
	I1210 00:04:18.625855   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:19.126760   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:19.626829   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:20.126663   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:20.626893   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:21.126851   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:21.625892   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.126904   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.626450   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.715909   84259 kubeadm.go:1113] duration metric: took 4.274924102s to wait for elevateKubeSystemPrivileges
	I1210 00:04:22.715958   84259 kubeadm.go:394] duration metric: took 5m1.740971462s to StartCluster
	I1210 00:04:22.715982   84259 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:04:22.716080   84259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:04:22.718538   84259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:04:22.718890   84259 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:04:22.718958   84259 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:04:22.719059   84259 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719079   84259 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.719091   84259 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:04:22.719100   84259 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719123   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.719120   84259 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719169   84259 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.719184   84259 addons.go:243] addon metrics-server should already be in state true
	I1210 00:04:22.719216   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.719133   84259 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:04:22.719134   84259 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-871210"
	I1210 00:04:22.719582   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719631   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.719717   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719728   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719753   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.719763   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.720519   84259 out.go:177] * Verifying Kubernetes components...
	I1210 00:04:22.722140   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:04:22.735592   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
	I1210 00:04:22.735716   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I1210 00:04:22.736075   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.736100   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.736605   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.736626   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.736605   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.736642   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.736995   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.737007   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.737217   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.737595   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.737639   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.739278   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41255
	I1210 00:04:22.741315   84259 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.741337   84259 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:04:22.741367   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.741739   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.741778   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.742259   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.742829   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.742858   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.743182   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.743632   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.743663   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.754879   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1210 00:04:22.755310   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.756015   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.756032   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.756397   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.756576   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.758535   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.759514   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33065
	I1210 00:04:22.759891   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.759925   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39821
	I1210 00:04:22.760309   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.760330   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.760346   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.760435   84259 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:04:22.760689   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.760831   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.760845   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.760860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.761166   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.761589   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.761627   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.761737   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:04:22.761754   84259 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:04:22.761774   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.762882   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.764440   84259 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:04:22.764810   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.765316   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.765339   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.765474   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.765675   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.765828   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.765894   84259 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:04:22.765912   84259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:04:22.765919   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.765934   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.768979   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.769446   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.769498   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.769626   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.769832   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.769956   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.770078   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.780995   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45841
	I1210 00:04:22.781451   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.782077   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.782098   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.782467   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.782705   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.784771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.785015   84259 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:04:22.785030   84259 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:04:22.785047   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.788208   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.788659   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.788690   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.788771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.789148   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.789275   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.789386   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.926076   84259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:04:22.945059   84259 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-871210" to be "Ready" ...
	I1210 00:04:22.970684   84259 node_ready.go:49] node "default-k8s-diff-port-871210" has status "Ready":"True"
	I1210 00:04:22.970727   84259 node_ready.go:38] duration metric: took 25.618738ms for node "default-k8s-diff-port-871210" to be "Ready" ...
	I1210 00:04:22.970740   84259 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:22.989661   84259 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace to be "Ready" ...
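	The node_ready / pod_ready waits above can be reproduced by hand with kubectl against the same profile; a rough manual equivalent (the 6m timeout mirrors the log, and the context name is simply the minikube profile name):

	    # Rough manual equivalent of minikube's readiness waits (not minikube's own code).
	    kubectl --context default-k8s-diff-port-871210 wait --for=condition=Ready \
	      node/default-k8s-diff-port-871210 --timeout=6m
	    kubectl --context default-k8s-diff-port-871210 -n kube-system wait \
	      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m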
	I1210 00:04:23.045411   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:04:23.045433   84259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:04:23.074907   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:04:23.076857   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:04:23.094809   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:04:23.094836   84259 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:04:23.125107   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:04:23.125136   84259 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:04:23.174286   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:04:23.554521   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554556   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554568   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554589   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554855   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.554871   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.554880   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554882   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.554888   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554893   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.554891   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.554903   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.555087   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.555099   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.555283   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.555354   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.555379   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.579741   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.579775   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.580075   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.580091   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.580113   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.776674   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.776701   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.777020   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.777060   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.777068   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.777076   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.777089   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.777314   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.777330   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.777347   84259 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-871210"
	I1210 00:04:23.778992   84259 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 00:04:23.198263   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:25.198299   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:23.780354   84259 addons.go:510] duration metric: took 1.061403814s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
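	With storage-provisioner, default-storageclass and metrics-server enabled, the quickest way to see what actually came up (the metrics-server pod is still Pending further down in this log) is a handful of ordinary kubectl checks, which are not part of the test itself:

	    # Ordinary post-enable checks (not run by the test).
	    kubectl --context default-k8s-diff-port-871210 -n kube-system get deploy metrics-server
	    kubectl --context default-k8s-diff-port-871210 get storageclass
	    kubectl --context default-k8s-diff-port-871210 -n kube-system get pod storage-provisioner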
	I1210 00:04:25.012490   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:27.495987   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:27.697345   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:29.698123   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:31.692183   83859 pod_ready.go:82] duration metric: took 4m0.000797786s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" ...
	E1210 00:04:31.692211   83859 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 00:04:31.692228   83859 pod_ready.go:39] duration metric: took 4m11.541153015s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:31.692258   83859 kubeadm.go:597] duration metric: took 4m19.334550967s to restartPrimaryControlPlane
	W1210 00:04:31.692305   83859 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:04:31.692329   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
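	The 4m0s WaitExtra timeout on metrics-server-6867b74b74-sd58c is what forces this full `kubeadm reset` of the no-preload-048296 profile. In these StartStop tests the metrics-server addon is pointed at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line earlier in this log), so an image-pull failure is the expected reason the pod never reports Ready; a generic way to confirm that by hand would be:

	    # Generic inspection of the stuck pod (not run by the test).
	    kubectl --context no-preload-048296 -n kube-system describe pod metrics-server-6867b74b74-sd58c
	    kubectl --context no-preload-048296 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20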
	I1210 00:04:29.995452   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:31.997927   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:34.496689   84259 pod_ready.go:93] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.496716   84259 pod_ready.go:82] duration metric: took 11.507027662s for pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.496730   84259 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.501166   84259 pod_ready.go:93] pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.501188   84259 pod_ready.go:82] duration metric: took 4.45016ms for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.501198   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.506061   84259 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.506085   84259 pod_ready.go:82] duration metric: took 4.881919ms for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.506096   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.510401   84259 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.510422   84259 pod_ready.go:82] duration metric: took 4.320761ms for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.510432   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pj85d" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.514319   84259 pod_ready.go:93] pod "kube-proxy-pj85d" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.514340   84259 pod_ready.go:82] duration metric: took 3.902541ms for pod "kube-proxy-pj85d" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.514348   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.896369   84259 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.896393   84259 pod_ready.go:82] duration metric: took 382.038726ms for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.896401   84259 pod_ready.go:39] duration metric: took 11.925650242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:34.896415   84259 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:04:34.896466   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:04:34.910589   84259 api_server.go:72] duration metric: took 12.191654524s to wait for apiserver process to appear ...
	I1210 00:04:34.910617   84259 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:04:34.910639   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1210 00:04:34.916503   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I1210 00:04:34.917955   84259 api_server.go:141] control plane version: v1.31.2
	I1210 00:04:34.917979   84259 api_server.go:131] duration metric: took 7.355946ms to wait for apiserver health ...
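	The healthz probe above can be repeated from outside minikube; -k (or the cluster CA from the profile directory) is needed because the apiserver serving certificate is signed by the cluster's own CA:

	    # Manual equivalent of the apiserver health probe in the log.
	    curl -k https://192.168.72.54:8444/healthz   # expected output: ok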
	I1210 00:04:34.917987   84259 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:04:35.098220   84259 system_pods.go:59] 9 kube-system pods found
	I1210 00:04:35.098252   84259 system_pods.go:61] "coredns-7c65d6cfc9-7xpcc" [6cff7e56-1785-41c8-bd9c-db9d3f0bd05f] Running
	I1210 00:04:35.098259   84259 system_pods.go:61] "coredns-7c65d6cfc9-z2n25" [b6b81952-7281-4705-9536-06eb939a5807] Running
	I1210 00:04:35.098265   84259 system_pods.go:61] "etcd-default-k8s-diff-port-871210" [e9c747a1-72c0-4b30-b401-c39a41fe5eb5] Running
	I1210 00:04:35.098271   84259 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-871210" [1a27b619-b576-4a36-b88a-0c498ad38628] Running
	I1210 00:04:35.098276   84259 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-871210" [d22a69b7-3389-46a9-9ca5-a3101aa34e05] Running
	I1210 00:04:35.098280   84259 system_pods.go:61] "kube-proxy-pj85d" [d1b9b056-f4a3-419c-86fa-a94d88464f74] Running
	I1210 00:04:35.098285   84259 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-871210" [50a03a5c-d0e5-454a-9a7b-49963ff59d06] Running
	I1210 00:04:35.098294   84259 system_pods.go:61] "metrics-server-6867b74b74-7g2qm" [49ac129a-c85d-4af1-b3b2-06bc10bced77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:04:35.098298   84259 system_pods.go:61] "storage-provisioner" [ea716edd-4030-4ec3-b094-c3a50154b473] Running
	I1210 00:04:35.098309   84259 system_pods.go:74] duration metric: took 180.315426ms to wait for pod list to return data ...
	I1210 00:04:35.098322   84259 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:04:35.294342   84259 default_sa.go:45] found service account: "default"
	I1210 00:04:35.294367   84259 default_sa.go:55] duration metric: took 196.039183ms for default service account to be created ...
	I1210 00:04:35.294376   84259 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:04:35.497331   84259 system_pods.go:86] 9 kube-system pods found
	I1210 00:04:35.497364   84259 system_pods.go:89] "coredns-7c65d6cfc9-7xpcc" [6cff7e56-1785-41c8-bd9c-db9d3f0bd05f] Running
	I1210 00:04:35.497369   84259 system_pods.go:89] "coredns-7c65d6cfc9-z2n25" [b6b81952-7281-4705-9536-06eb939a5807] Running
	I1210 00:04:35.497373   84259 system_pods.go:89] "etcd-default-k8s-diff-port-871210" [e9c747a1-72c0-4b30-b401-c39a41fe5eb5] Running
	I1210 00:04:35.497377   84259 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-871210" [1a27b619-b576-4a36-b88a-0c498ad38628] Running
	I1210 00:04:35.497381   84259 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-871210" [d22a69b7-3389-46a9-9ca5-a3101aa34e05] Running
	I1210 00:04:35.497384   84259 system_pods.go:89] "kube-proxy-pj85d" [d1b9b056-f4a3-419c-86fa-a94d88464f74] Running
	I1210 00:04:35.497387   84259 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-871210" [50a03a5c-d0e5-454a-9a7b-49963ff59d06] Running
	I1210 00:04:35.497396   84259 system_pods.go:89] "metrics-server-6867b74b74-7g2qm" [49ac129a-c85d-4af1-b3b2-06bc10bced77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:04:35.497400   84259 system_pods.go:89] "storage-provisioner" [ea716edd-4030-4ec3-b094-c3a50154b473] Running
	I1210 00:04:35.497409   84259 system_pods.go:126] duration metric: took 203.02694ms to wait for k8s-apps to be running ...
	I1210 00:04:35.497416   84259 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:04:35.497456   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:35.512217   84259 system_svc.go:56] duration metric: took 14.792056ms WaitForService to wait for kubelet
	I1210 00:04:35.512248   84259 kubeadm.go:582] duration metric: took 12.793318604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:04:35.512274   84259 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:04:35.695292   84259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:04:35.695318   84259 node_conditions.go:123] node cpu capacity is 2
	I1210 00:04:35.695329   84259 node_conditions.go:105] duration metric: took 183.048181ms to run NodePressure ...
	I1210 00:04:35.695341   84259 start.go:241] waiting for startup goroutines ...
	I1210 00:04:35.695348   84259 start.go:246] waiting for cluster config update ...
	I1210 00:04:35.695361   84259 start.go:255] writing updated cluster config ...
	I1210 00:04:35.695666   84259 ssh_runner.go:195] Run: rm -f paused
	I1210 00:04:35.742539   84259 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:04:35.744394   84259 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-871210" cluster and "default" namespace by default
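	At this point the default-k8s-diff-port-871210 profile is fully up and kubectl 1.31.3 is talking to a 1.31.2 control plane (no relevant version skew). A couple of routine sanity checks against the finished profile, outside of anything the test runs:

	    # Routine sanity checks against the finished profile (not part of the test).
	    kubectl config current-context    # should print default-k8s-diff-port-871210
	    kubectl get nodes -o wide
	    kubectl -n kube-system get pods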
	I1210 00:04:57.851348   83859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.159001958s)
	I1210 00:04:57.851413   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:57.866601   83859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:04:57.876643   83859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:04:57.886172   83859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:04:57.886200   83859 kubeadm.go:157] found existing configuration files:
	
	I1210 00:04:57.886252   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:04:57.895643   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:04:57.895722   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:04:57.905397   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:04:57.914236   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:04:57.914299   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:04:57.923225   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:04:57.932422   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:04:57.932476   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:04:57.942840   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:04:57.952087   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:04:57.952159   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:04:57.961371   83859 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:04:58.005314   83859 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:04:58.005444   83859 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:04:58.098287   83859 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:04:58.098431   83859 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:04:58.098591   83859 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:04:58.106525   83859 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:04:58.109142   83859 out.go:235]   - Generating certificates and keys ...
	I1210 00:04:58.109219   83859 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:04:58.109320   83859 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:04:58.109456   83859 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:04:58.109536   83859 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:04:58.109637   83859 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:04:58.109728   83859 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:04:58.109840   83859 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:04:58.109940   83859 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:04:58.110047   83859 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:04:58.110152   83859 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:04:58.110225   83859 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:04:58.110296   83859 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:04:58.357649   83859 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:04:58.505840   83859 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:04:58.890560   83859 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:04:58.965928   83859 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:04:59.341665   83859 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:04:59.342240   83859 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:04:59.344644   83859 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:04:59.346225   83859 out.go:235]   - Booting up control plane ...
	I1210 00:04:59.346353   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:04:59.348071   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:04:59.348893   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:04:59.366824   83859 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:04:59.372906   83859 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:04:59.372962   83859 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:04:59.509554   83859 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:04:59.509695   83859 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:05:00.014472   83859 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.085309ms
	I1210 00:05:00.014639   83859 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:05:05.017162   83859 kubeadm.go:310] [api-check] The API server is healthy after 5.002677832s
	I1210 00:05:05.029078   83859 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:05:05.048524   83859 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:05:05.080458   83859 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:05:05.080735   83859 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-048296 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:05:05.091793   83859 kubeadm.go:310] [bootstrap-token] Using token: y5jzuu.af7syybsclcivlzq
	I1210 00:05:05.093253   83859 out.go:235]   - Configuring RBAC rules ...
	I1210 00:05:05.093401   83859 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:05:05.098842   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:05:05.106341   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:05:05.109935   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:05:05.114403   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:05:05.121436   83859 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:05:05.424690   83859 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:05:05.851096   83859 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:05:06.425724   83859 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:05:06.426713   83859 kubeadm.go:310] 
	I1210 00:05:06.426785   83859 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:05:06.426808   83859 kubeadm.go:310] 
	I1210 00:05:06.426904   83859 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:05:06.426936   83859 kubeadm.go:310] 
	I1210 00:05:06.426981   83859 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:05:06.427061   83859 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:05:06.427110   83859 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:05:06.427120   83859 kubeadm.go:310] 
	I1210 00:05:06.427199   83859 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:05:06.427210   83859 kubeadm.go:310] 
	I1210 00:05:06.427282   83859 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:05:06.427294   83859 kubeadm.go:310] 
	I1210 00:05:06.427381   83859 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:05:06.427486   83859 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:05:06.427623   83859 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:05:06.427635   83859 kubeadm.go:310] 
	I1210 00:05:06.427757   83859 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:05:06.427874   83859 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:05:06.427905   83859 kubeadm.go:310] 
	I1210 00:05:06.428032   83859 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y5jzuu.af7syybsclcivlzq \
	I1210 00:05:06.428167   83859 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1210 00:05:06.428201   83859 kubeadm.go:310] 	--control-plane 
	I1210 00:05:06.428211   83859 kubeadm.go:310] 
	I1210 00:05:06.428322   83859 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:05:06.428332   83859 kubeadm.go:310] 
	I1210 00:05:06.428438   83859 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y5jzuu.af7syybsclcivlzq \
	I1210 00:05:06.428572   83859 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1210 00:05:06.428746   83859 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:05:06.428832   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:05:06.428849   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:05:06.431674   83859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:05:06.433006   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:05:06.444058   83859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 00:05:06.462707   83859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:05:06.462838   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:06.462873   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-048296 minikube.k8s.io/updated_at=2024_12_10T00_05_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=no-preload-048296 minikube.k8s.io/primary=true
	I1210 00:05:06.493379   83859 ops.go:34] apiserver oom_adj: -16
	I1210 00:05:06.666080   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:07.166762   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:07.666408   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:08.166269   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:08.666734   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:09.166797   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:09.666522   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.166230   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.667061   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.774149   83859 kubeadm.go:1113] duration metric: took 4.311371383s to wait for elevateKubeSystemPrivileges
	I1210 00:05:10.774188   83859 kubeadm.go:394] duration metric: took 4m58.464009996s to StartCluster
	I1210 00:05:10.774211   83859 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:10.774290   83859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:05:10.777711   83859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:10.778040   83859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:05:10.778133   83859 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:05:10.778230   83859 addons.go:69] Setting storage-provisioner=true in profile "no-preload-048296"
	I1210 00:05:10.778247   83859 addons.go:234] Setting addon storage-provisioner=true in "no-preload-048296"
	W1210 00:05:10.778255   83859 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:05:10.778249   83859 addons.go:69] Setting default-storageclass=true in profile "no-preload-048296"
	I1210 00:05:10.778261   83859 addons.go:69] Setting metrics-server=true in profile "no-preload-048296"
	I1210 00:05:10.778276   83859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-048296"
	I1210 00:05:10.778286   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.778299   83859 addons.go:234] Setting addon metrics-server=true in "no-preload-048296"
	I1210 00:05:10.778339   83859 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1210 00:05:10.778394   83859 addons.go:243] addon metrics-server should already be in state true
	I1210 00:05:10.778446   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.778684   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778716   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778746   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.778721   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.778860   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778897   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.779748   83859 out.go:177] * Verifying Kubernetes components...
	I1210 00:05:10.781467   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:05:10.795108   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34465
	I1210 00:05:10.795227   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I1210 00:05:10.795615   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.795709   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.796135   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.796160   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.796221   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.796240   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.796522   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.796539   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.797128   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.797162   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.797200   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.797167   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.797697   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I1210 00:05:10.798091   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.798587   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.798613   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.798982   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.799324   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.803238   83859 addons.go:234] Setting addon default-storageclass=true in "no-preload-048296"
	W1210 00:05:10.803262   83859 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:05:10.803291   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.803683   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.803722   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.814000   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I1210 00:05:10.814046   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I1210 00:05:10.814470   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.814686   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.814992   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.815014   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.815427   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.815448   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.815515   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.815748   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.815974   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.816171   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.817989   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.818439   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.820342   83859 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:05:10.820360   83859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:05:10.821535   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:05:10.821556   83859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:05:10.821577   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.821652   83859 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:05:10.821673   83859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:05:10.821690   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.825763   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826205   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.826228   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42881
	I1210 00:05:10.826252   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826275   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.826294   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826327   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.826228   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826504   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.826563   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.826633   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.826711   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.826860   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.826864   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.827029   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:10.827152   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.827173   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.827241   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:10.827691   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.829421   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.829471   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.867902   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I1210 00:05:10.868566   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.869138   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.869169   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.869575   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.869782   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.871531   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.871792   83859 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:05:10.871810   83859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:05:10.871832   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.874309   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.874822   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.874845   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.874999   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.875202   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.875354   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.875501   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:11.009426   83859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:05:11.030237   83859 node_ready.go:35] waiting up to 6m0s for node "no-preload-048296" to be "Ready" ...
	I1210 00:05:11.054344   83859 node_ready.go:49] node "no-preload-048296" has status "Ready":"True"
	I1210 00:05:11.054366   83859 node_ready.go:38] duration metric: took 24.096361ms for node "no-preload-048296" to be "Ready" ...
	I1210 00:05:11.054376   83859 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:05:11.071740   83859 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:11.123723   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:05:11.137744   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:05:11.137772   83859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:05:11.159680   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:05:11.179245   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:05:11.179267   83859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:05:11.240553   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:05:11.240580   83859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:05:11.311813   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:05:12.041427   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041463   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041509   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041532   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041886   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.041949   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.041954   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.041963   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041967   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.041972   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041932   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.041986   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.042087   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.042229   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.042245   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.042297   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.042319   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.092616   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.092639   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.092910   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.092923   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.535943   83859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.224081671s)
	I1210 00:05:12.536008   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.536020   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.536456   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.536459   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.536482   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.536491   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.536498   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.536728   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.536735   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.536750   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.536761   83859 addons.go:475] Verifying addon metrics-server=true in "no-preload-048296"
	I1210 00:05:12.538638   83859 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 00:05:12.539910   83859 addons.go:510] duration metric: took 1.761785575s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 00:05:13.078954   83859 pod_ready.go:103] pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:05:13.579737   83859 pod_ready.go:93] pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:13.579760   83859 pod_ready.go:82] duration metric: took 2.507994954s for pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:13.579770   83859 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:14.586682   83859 pod_ready.go:93] pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:14.586704   83859 pod_ready.go:82] duration metric: took 1.006927767s for pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:14.586714   83859 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.092318   83859 pod_ready.go:93] pod "etcd-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.092345   83859 pod_ready.go:82] duration metric: took 505.624811ms for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.092356   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.096802   83859 pod_ready.go:93] pod "kube-apiserver-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.096828   83859 pod_ready.go:82] duration metric: took 4.463914ms for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.096839   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.100904   83859 pod_ready.go:93] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.100923   83859 pod_ready.go:82] duration metric: took 4.07832ms for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.100932   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qklxb" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.176527   83859 pod_ready.go:93] pod "kube-proxy-qklxb" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.176552   83859 pod_ready.go:82] duration metric: took 75.613294ms for pod "kube-proxy-qklxb" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.176562   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:17.182952   83859 pod_ready.go:103] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:05:18.181993   83859 pod_ready.go:93] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:18.182016   83859 pod_ready.go:82] duration metric: took 3.005447779s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:18.182024   83859 pod_ready.go:39] duration metric: took 7.127639413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
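	For context on the pod_ready wait above: the loop simply polls each system-critical pod until its Ready condition turns True or the 6m0s budget runs out. A minimal, hedged sketch of the same idea written against client-go (not minikube's actual pod_ready helpers); the kubeconfig path is the one this run writes, everything else is illustrative:
```go
// Sketch only: poll one kube-system pod until PodReady=True or timeout.
// Assumes client-go; minikube's own pod_ready.go differs in detail.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodReady(cs *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil // pod reported Ready
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, name, timeout)
}

func main() {
	// Kubeconfig path taken from the log above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19888-18950/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(cs, "kube-system", "etcd-no-preload-048296", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```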
	I1210 00:05:18.182038   83859 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:05:18.182084   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:05:18.196127   83859 api_server.go:72] duration metric: took 7.418043985s to wait for apiserver process to appear ...
	I1210 00:05:18.196155   83859 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:05:18.196176   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:05:18.200537   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 200:
	ok
	I1210 00:05:18.201476   83859 api_server.go:141] control plane version: v1.31.2
	I1210 00:05:18.201501   83859 api_server.go:131] duration metric: took 5.340199ms to wait for apiserver health ...
	I1210 00:05:18.201508   83859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:05:18.208072   83859 system_pods.go:59] 9 kube-system pods found
	I1210 00:05:18.208105   83859 system_pods.go:61] "coredns-7c65d6cfc9-56djc" [2e25bad9-b88a-4ac8-a180-968bf6b057a2] Running
	I1210 00:05:18.208114   83859 system_pods.go:61] "coredns-7c65d6cfc9-8rxx7" [3fdfd41b-8ef7-41af-a703-ebecfe9ad319] Running
	I1210 00:05:18.208119   83859 system_pods.go:61] "etcd-no-preload-048296" [4447a7d6-df4b-4760-9352-4df0310017ee] Running
	I1210 00:05:18.208125   83859 system_pods.go:61] "kube-apiserver-no-preload-048296" [55a06182-1520-4497-8358-639438eeb297] Running
	I1210 00:05:18.208130   83859 system_pods.go:61] "kube-controller-manager-no-preload-048296" [f30bdb21-7742-46e7-90fe-403679f5e02a] Running
	I1210 00:05:18.208136   83859 system_pods.go:61] "kube-proxy-qklxb" [8bb029a1-abf9-4825-b9ec-0520a78cb3d8] Running
	I1210 00:05:18.208141   83859 system_pods.go:61] "kube-scheduler-no-preload-048296" [019d6d37-c586-4f12-8daf-b60752223cf1] Running
	I1210 00:05:18.208152   83859 system_pods.go:61] "metrics-server-6867b74b74-n2f8c" [8e9f56c9-fd67-4715-9148-1255be17f1fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:05:18.208163   83859 system_pods.go:61] "storage-provisioner" [55fe311f-4610-4805-9fb7-3f1cac7c96e6] Running
	I1210 00:05:18.208176   83859 system_pods.go:74] duration metric: took 6.661729ms to wait for pod list to return data ...
	I1210 00:05:18.208187   83859 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:05:18.375431   83859 default_sa.go:45] found service account: "default"
	I1210 00:05:18.375458   83859 default_sa.go:55] duration metric: took 167.260728ms for default service account to be created ...
	I1210 00:05:18.375467   83859 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:05:18.578148   83859 system_pods.go:86] 9 kube-system pods found
	I1210 00:05:18.578176   83859 system_pods.go:89] "coredns-7c65d6cfc9-56djc" [2e25bad9-b88a-4ac8-a180-968bf6b057a2] Running
	I1210 00:05:18.578183   83859 system_pods.go:89] "coredns-7c65d6cfc9-8rxx7" [3fdfd41b-8ef7-41af-a703-ebecfe9ad319] Running
	I1210 00:05:18.578187   83859 system_pods.go:89] "etcd-no-preload-048296" [4447a7d6-df4b-4760-9352-4df0310017ee] Running
	I1210 00:05:18.578191   83859 system_pods.go:89] "kube-apiserver-no-preload-048296" [55a06182-1520-4497-8358-639438eeb297] Running
	I1210 00:05:18.578195   83859 system_pods.go:89] "kube-controller-manager-no-preload-048296" [f30bdb21-7742-46e7-90fe-403679f5e02a] Running
	I1210 00:05:18.578198   83859 system_pods.go:89] "kube-proxy-qklxb" [8bb029a1-abf9-4825-b9ec-0520a78cb3d8] Running
	I1210 00:05:18.578203   83859 system_pods.go:89] "kube-scheduler-no-preload-048296" [019d6d37-c586-4f12-8daf-b60752223cf1] Running
	I1210 00:05:18.578209   83859 system_pods.go:89] "metrics-server-6867b74b74-n2f8c" [8e9f56c9-fd67-4715-9148-1255be17f1fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:05:18.578213   83859 system_pods.go:89] "storage-provisioner" [55fe311f-4610-4805-9fb7-3f1cac7c96e6] Running
	I1210 00:05:18.578223   83859 system_pods.go:126] duration metric: took 202.750724ms to wait for k8s-apps to be running ...
	I1210 00:05:18.578233   83859 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:05:18.578277   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:05:18.592635   83859 system_svc.go:56] duration metric: took 14.391582ms WaitForService to wait for kubelet
	I1210 00:05:18.592669   83859 kubeadm.go:582] duration metric: took 7.814589832s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:05:18.592690   83859 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:05:18.776611   83859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:05:18.776632   83859 node_conditions.go:123] node cpu capacity is 2
	I1210 00:05:18.776642   83859 node_conditions.go:105] duration metric: took 183.94737ms to run NodePressure ...
	I1210 00:05:18.776654   83859 start.go:241] waiting for startup goroutines ...
	I1210 00:05:18.776660   83859 start.go:246] waiting for cluster config update ...
	I1210 00:05:18.776672   83859 start.go:255] writing updated cluster config ...
	I1210 00:05:18.776944   83859 ssh_runner.go:195] Run: rm -f paused
	I1210 00:05:18.826550   83859 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:05:18.828405   83859 out.go:177] * Done! kubectl is now configured to use "no-preload-048296" cluster and "default" namespace by default
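	The api_server wait logged above amounts to polling https://192.168.61.182:8443/healthz until it answers 200. A minimal sketch under that assumption (not minikube's actual api_server.go; TLS verification is skipped here only to keep the example short, whereas minikube trusts the cluster CA):
```go
// Sketch only: poll an apiserver /healthz URL until 200 OK or timeout.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForAPIServer("https://192.168.61.182:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```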
	I1210 00:05:43.088214   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:05:43.088352   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:05:43.089912   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:05:43.089988   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:05:43.090054   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:05:43.090140   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:05:43.090225   84547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:05:43.090302   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:05:43.092050   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:05:43.092141   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:05:43.092210   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:05:43.092305   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:05:43.092392   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:05:43.092493   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:05:43.092569   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:05:43.092680   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:05:43.092761   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:05:43.092865   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:05:43.092975   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:05:43.093045   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:05:43.093143   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:05:43.093188   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:05:43.093236   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:05:43.093317   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:05:43.093402   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:05:43.093561   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:05:43.093728   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:05:43.093785   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:05:43.093855   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:05:43.095285   84547 out.go:235]   - Booting up control plane ...
	I1210 00:05:43.095396   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:05:43.095469   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:05:43.095525   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:05:43.095630   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:05:43.095804   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:05:43.095873   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:05:43.095960   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096155   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096237   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096414   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096486   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096679   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096741   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096904   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096969   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.097122   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.097129   84547 kubeadm.go:310] 
	I1210 00:05:43.097167   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:05:43.097202   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:05:43.097211   84547 kubeadm.go:310] 
	I1210 00:05:43.097251   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:05:43.097280   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:05:43.097373   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:05:43.097379   84547 kubeadm.go:310] 
	I1210 00:05:43.097465   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:05:43.097495   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:05:43.097522   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:05:43.097531   84547 kubeadm.go:310] 
	I1210 00:05:43.097619   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:05:43.097688   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:05:43.097694   84547 kubeadm.go:310] 
	I1210 00:05:43.097794   84547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:05:43.097875   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:05:43.097941   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:05:43.098038   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:05:43.098124   84547 kubeadm.go:310] 
	W1210 00:05:43.098179   84547 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 00:05:43.098224   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:05:48.532537   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.434287494s)
	I1210 00:05:48.532615   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:05:48.546394   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:05:48.555650   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:05:48.555669   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:05:48.555721   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:05:48.565301   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:05:48.565368   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:05:48.575330   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:05:48.584238   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:05:48.584324   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:05:48.593395   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.602150   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:05:48.602209   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.611177   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:05:48.619837   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:05:48.619907   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
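	The four grep/rm pairs above are a stale-config sweep: any /etc/kubernetes/*.conf that does not already reference https://control-plane.minikube.internal:8443 is removed before the kubeadm init retry. A rough sketch of that logic (assumed, not minikube's kubeadm.go; it would need the same root privileges the log obtains via sudo over SSH):
```go
// Sketch only: delete kubeconfig files that don't point at the expected endpoint.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func cleanStaleKubeconfigs(endpoint string) {
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := filepath.Join("/etc/kubernetes", f)
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove it, mirroring `rm -f` (errors ignored).
			os.Remove(path)
			fmt.Printf("removed stale %s\n", path)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
```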
	I1210 00:05:48.629126   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:05:48.827680   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:07:44.857816   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:07:44.857930   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:07:44.859490   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:07:44.859542   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:07:44.859627   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:07:44.859758   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:07:44.859894   84547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:07:44.859953   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:07:44.861538   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:07:44.861657   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:07:44.861736   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:07:44.861809   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:07:44.861861   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:07:44.861920   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:07:44.861969   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:07:44.862024   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:07:44.862077   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:07:44.862143   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:07:44.862212   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:07:44.862246   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:07:44.862293   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:07:44.862343   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:07:44.862408   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:07:44.862505   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:07:44.862615   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:07:44.862765   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:07:44.862897   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:07:44.862955   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:07:44.863059   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:07:44.865485   84547 out.go:235]   - Booting up control plane ...
	I1210 00:07:44.865595   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:07:44.865720   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:07:44.865815   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:07:44.865929   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:07:44.866136   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:07:44.866209   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:07:44.866332   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866517   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866586   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866752   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866818   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866976   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867042   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867210   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867323   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867512   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867522   84547 kubeadm.go:310] 
	I1210 00:07:44.867611   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:07:44.867671   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:07:44.867691   84547 kubeadm.go:310] 
	I1210 00:07:44.867729   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:07:44.867766   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:07:44.867880   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:07:44.867895   84547 kubeadm.go:310] 
	I1210 00:07:44.868055   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:07:44.868105   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:07:44.868138   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:07:44.868146   84547 kubeadm.go:310] 
	I1210 00:07:44.868250   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:07:44.868354   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:07:44.868362   84547 kubeadm.go:310] 
	I1210 00:07:44.868464   84547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:07:44.868540   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:07:44.868606   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:07:44.868689   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:07:44.868741   84547 kubeadm.go:310] 
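	[editor's note] The repeated [kubelet-check] failures above come down to a single probe: an HTTP GET against the kubelet's healthz endpoint on localhost:10248, which never answers because the kubelet process is not running. As a hedged illustration only (this is not kubeadm's or minikube's actual code; the port and path are taken from the log lines above, everything else is assumed), a minimal Go sketch of that probe:

	// kubelet_healthz_probe.go - minimal sketch of the probe behind the
	// "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz'"
	// lines above. Illustrative only, not the kubeadm implementation.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func probeKubeletHealthz() error {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// This is the failure mode seen in the log:
			// dial tcp 127.0.0.1:10248: connect: connection refused
			return fmt.Errorf("kubelet healthz unreachable: %w", err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("kubelet unhealthy: %d %s", resp.StatusCode, string(body))
		}
		return nil // kubelet answered "ok"
	}

	func main() {
		if err := probeKubeletHealthz(); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kubelet healthz: ok")
	}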
	I1210 00:07:44.868762   84547 kubeadm.go:394] duration metric: took 8m1.943355039s to StartCluster
	I1210 00:07:44.868799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:07:44.868852   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:07:44.906641   84547 cri.go:89] found id: ""
	I1210 00:07:44.906667   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.906675   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:07:44.906681   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:07:44.906734   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:07:44.942832   84547 cri.go:89] found id: ""
	I1210 00:07:44.942863   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.942872   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:07:44.942881   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:07:44.942945   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:07:44.978010   84547 cri.go:89] found id: ""
	I1210 00:07:44.978034   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.978042   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:07:44.978047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:07:44.978108   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:07:45.017066   84547 cri.go:89] found id: ""
	I1210 00:07:45.017089   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.017097   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:07:45.017110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:07:45.017172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:07:45.049757   84547 cri.go:89] found id: ""
	I1210 00:07:45.049778   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.049786   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:07:45.049791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:07:45.049842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:07:45.082754   84547 cri.go:89] found id: ""
	I1210 00:07:45.082789   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.082800   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:07:45.082807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:07:45.082933   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:07:45.117188   84547 cri.go:89] found id: ""
	I1210 00:07:45.117219   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.117227   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:07:45.117233   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:07:45.117302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:07:45.153747   84547 cri.go:89] found id: ""
	I1210 00:07:45.153776   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.153785   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:07:45.153795   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:07:45.153810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:07:45.207625   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:07:45.207662   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:07:45.220929   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:07:45.220957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:07:45.291850   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:07:45.291883   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:07:45.291899   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:07:45.397592   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:07:45.397629   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
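	[editor's note] The cri.go entries above (listing CRI containers per component and finding none) amount to shelling out to crictl and treating empty output as "No container was found". A rough Go sketch under that assumption follows; the helper name and control flow are illustrative, not minikube's implementation, and only the crictl invocation itself is taken from the log.

	// list_cri_containers.go - rough sketch of the per-component lookup logged
	// above ("sudo crictl ps -a --quiet --name=<component>"). Illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listCRIContainers returns the container IDs crictl reports for a component
	// name such as "kube-apiserver". An empty result corresponds to the
	// "0 containers: []" / "No container was found" lines in the log.
	func listCRIContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps failed for %s: %w", name, err)
		}
		return strings.Fields(string(out)), nil // one container ID per line
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			ids, err := listCRIContainers(c)
			if err != nil {
				fmt.Println(err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}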
	W1210 00:07:45.446755   84547 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 00:07:45.446816   84547 out.go:270] * 
	W1210 00:07:45.446898   84547 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.446921   84547 out.go:270] * 
	W1210 00:07:45.448145   84547 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 00:07:45.452118   84547 out.go:201] 
	W1210 00:07:45.453335   84547 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.453398   84547 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 00:07:45.453438   84547 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 00:07:45.455704   84547 out.go:201] 
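	[editor's note] The exit message above blames a cgroup-driver mismatch and suggests retrying with --extra-config=kubelet.cgroup-driver=systemd. A hedged sketch of how a wrapper script might retry the start with that flag; the profile name and everything except the --extra-config flag are placeholders, not taken from this run.

	// retry_with_cgroup_driver.go - sketch of retrying `minikube start` with the
	// kubelet cgroup-driver override suggested in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start",
			"-p", "example-profile", // placeholder profile name
			"--extra-config=kubelet.cgroup-driver=systemd")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "minikube start failed:", err)
			os.Exit(1)
		}
	}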
	
	
	==> CRI-O <==
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.805049320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c5ad5a0-d647-492e-b6df-6fb48058ac54 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.805245485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97d81c851470ff8904f2a704c67542ee6316104d04ab675437a01ea6f07fbfd8,PodSandboxId:3e63d3f10ab367d7f1654c75a37a1da60094faf8cd712564e70d0d5dba1e0e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733789112587816469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55fe311f-4610-4805-9fb7-3f1cac7c96e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b237f65b1f52cc0946de1dff121714fb7635a1fbe982c80c699291423b82ccf8,PodSandboxId:37db34926854c5a7226a215f95806d6c5ad4d2b2363dc7f29cd5e4ee4c161c10,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112185958500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-56djc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e25bad9-b88a-4ac8-a180-968bf6b057a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a0e82982a44e43e0ed41ee3568d34c43378472c4ad70abc0278c8546b5eab1,PodSandboxId:b3eaf6f8899f7f7c5723661565f19184e409f490d8d44b49681ddbfa54f074e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112059041810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8rxx7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f
dfd41b-8ef7-41af-a703-ebecfe9ad319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b333a8bf49675208e97a25fca7ababa5c5e7aa1fa332e2d69b8abbca962a2c4,PodSandboxId:5c8d4180070cac533a1c5986663ae2a8f67bb62de994ac84b837d4580da01292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733789111448208569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qklxb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bb029a1-abf9-4825-b9ec-0520a78cb3d8,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c119307a718a68531ce047e24f34ab358273d68db90f69181c4d18122412639e,PodSandboxId:2f0a05c488178f0ab5a198f121cc114fa7792a6f4c7860cd619fc0c963929221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789100581038645,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36b46162662e8ae135696fdf8945315,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280dbddeda2f1b7a46a964534d86646310bb4e84b9820f666730c6b111b08f6,PodSandboxId:c72f6f0dc52dd73dcbd327ba4546c4ee3ad1aad42f23613604bc2f31fe087951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173378910061119
1956,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7012c4523526373176e1bdb3740fe4c3f0da98aefab113c29ec3fd8fc86dd,PodSandboxId:0604e23adb4603bc51d0c42824765a240119b427d9680120e1c727a002283b77,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789100574603874,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d3ccc5731b295ff6bc2d83c07ce51c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63e80d74c90d945f31cb7bab277568619312b598dfb63dd17020118f61ed233,PodSandboxId:32cf03c03dbf5e86519da793fb2cb443fe863efefb7f3e9afd145c7dbd3000fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789100520822298,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ef24a05a34425fdec92ff0a9fd5bbfa,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a425c2d931ef8748e90cd391f2121a62c9cbbaf1a686b2b9f5f811110fffbf58,PodSandboxId:7da95a45dfd0f1e9baa3c1bc2763cb0bd944cabac77d851c0432c361b0139d70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733788815385624290,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c5ad5a0-d647-492e-b6df-6fb48058ac54 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.845458409Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e2b3d15-5658-4d62-9899-df045e3f86e9 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.845562613Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e2b3d15-5658-4d62-9899-df045e3f86e9 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.847069096Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3bc44fac-3d6b-452a-b2c4-37c80f8bb283 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.847609461Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789660847584242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3bc44fac-3d6b-452a-b2c4-37c80f8bb283 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.848116108Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43e4bd02-96bf-4c06-bd31-21fff4378f0f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.848205750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43e4bd02-96bf-4c06-bd31-21fff4378f0f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.848410277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97d81c851470ff8904f2a704c67542ee6316104d04ab675437a01ea6f07fbfd8,PodSandboxId:3e63d3f10ab367d7f1654c75a37a1da60094faf8cd712564e70d0d5dba1e0e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733789112587816469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55fe311f-4610-4805-9fb7-3f1cac7c96e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b237f65b1f52cc0946de1dff121714fb7635a1fbe982c80c699291423b82ccf8,PodSandboxId:37db34926854c5a7226a215f95806d6c5ad4d2b2363dc7f29cd5e4ee4c161c10,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112185958500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-56djc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e25bad9-b88a-4ac8-a180-968bf6b057a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a0e82982a44e43e0ed41ee3568d34c43378472c4ad70abc0278c8546b5eab1,PodSandboxId:b3eaf6f8899f7f7c5723661565f19184e409f490d8d44b49681ddbfa54f074e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112059041810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8rxx7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f
dfd41b-8ef7-41af-a703-ebecfe9ad319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b333a8bf49675208e97a25fca7ababa5c5e7aa1fa332e2d69b8abbca962a2c4,PodSandboxId:5c8d4180070cac533a1c5986663ae2a8f67bb62de994ac84b837d4580da01292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733789111448208569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qklxb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bb029a1-abf9-4825-b9ec-0520a78cb3d8,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c119307a718a68531ce047e24f34ab358273d68db90f69181c4d18122412639e,PodSandboxId:2f0a05c488178f0ab5a198f121cc114fa7792a6f4c7860cd619fc0c963929221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789100581038645,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36b46162662e8ae135696fdf8945315,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280dbddeda2f1b7a46a964534d86646310bb4e84b9820f666730c6b111b08f6,PodSandboxId:c72f6f0dc52dd73dcbd327ba4546c4ee3ad1aad42f23613604bc2f31fe087951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173378910061119
1956,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7012c4523526373176e1bdb3740fe4c3f0da98aefab113c29ec3fd8fc86dd,PodSandboxId:0604e23adb4603bc51d0c42824765a240119b427d9680120e1c727a002283b77,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789100574603874,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d3ccc5731b295ff6bc2d83c07ce51c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63e80d74c90d945f31cb7bab277568619312b598dfb63dd17020118f61ed233,PodSandboxId:32cf03c03dbf5e86519da793fb2cb443fe863efefb7f3e9afd145c7dbd3000fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789100520822298,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ef24a05a34425fdec92ff0a9fd5bbfa,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a425c2d931ef8748e90cd391f2121a62c9cbbaf1a686b2b9f5f811110fffbf58,PodSandboxId:7da95a45dfd0f1e9baa3c1bc2763cb0bd944cabac77d851c0432c361b0139d70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733788815385624290,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43e4bd02-96bf-4c06-bd31-21fff4378f0f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.888049675Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=995502fa-0ce8-43f4-b5dd-6bf44062919b name=/runtime.v1.RuntimeService/Version
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.888242556Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=995502fa-0ce8-43f4-b5dd-6bf44062919b name=/runtime.v1.RuntimeService/Version
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.889314914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa4d3efc-a6a4-4daa-a125-a683c781333f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.889674210Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789660889651139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa4d3efc-a6a4-4daa-a125-a683c781333f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.890165410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f048ee4-327a-48c0-86f4-9c41b9045dca name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.890301391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f048ee4-327a-48c0-86f4-9c41b9045dca name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.890581813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97d81c851470ff8904f2a704c67542ee6316104d04ab675437a01ea6f07fbfd8,PodSandboxId:3e63d3f10ab367d7f1654c75a37a1da60094faf8cd712564e70d0d5dba1e0e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733789112587816469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55fe311f-4610-4805-9fb7-3f1cac7c96e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b237f65b1f52cc0946de1dff121714fb7635a1fbe982c80c699291423b82ccf8,PodSandboxId:37db34926854c5a7226a215f95806d6c5ad4d2b2363dc7f29cd5e4ee4c161c10,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112185958500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-56djc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e25bad9-b88a-4ac8-a180-968bf6b057a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a0e82982a44e43e0ed41ee3568d34c43378472c4ad70abc0278c8546b5eab1,PodSandboxId:b3eaf6f8899f7f7c5723661565f19184e409f490d8d44b49681ddbfa54f074e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112059041810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8rxx7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f
dfd41b-8ef7-41af-a703-ebecfe9ad319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b333a8bf49675208e97a25fca7ababa5c5e7aa1fa332e2d69b8abbca962a2c4,PodSandboxId:5c8d4180070cac533a1c5986663ae2a8f67bb62de994ac84b837d4580da01292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733789111448208569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qklxb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bb029a1-abf9-4825-b9ec-0520a78cb3d8,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c119307a718a68531ce047e24f34ab358273d68db90f69181c4d18122412639e,PodSandboxId:2f0a05c488178f0ab5a198f121cc114fa7792a6f4c7860cd619fc0c963929221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789100581038645,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36b46162662e8ae135696fdf8945315,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280dbddeda2f1b7a46a964534d86646310bb4e84b9820f666730c6b111b08f6,PodSandboxId:c72f6f0dc52dd73dcbd327ba4546c4ee3ad1aad42f23613604bc2f31fe087951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173378910061119
1956,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7012c4523526373176e1bdb3740fe4c3f0da98aefab113c29ec3fd8fc86dd,PodSandboxId:0604e23adb4603bc51d0c42824765a240119b427d9680120e1c727a002283b77,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789100574603874,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d3ccc5731b295ff6bc2d83c07ce51c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63e80d74c90d945f31cb7bab277568619312b598dfb63dd17020118f61ed233,PodSandboxId:32cf03c03dbf5e86519da793fb2cb443fe863efefb7f3e9afd145c7dbd3000fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789100520822298,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ef24a05a34425fdec92ff0a9fd5bbfa,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a425c2d931ef8748e90cd391f2121a62c9cbbaf1a686b2b9f5f811110fffbf58,PodSandboxId:7da95a45dfd0f1e9baa3c1bc2763cb0bd944cabac77d851c0432c361b0139d70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733788815385624290,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f048ee4-327a-48c0-86f4-9c41b9045dca name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.932100444Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db5e24ee-6728-44a6-b486-ff28df6cf772 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.932293376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db5e24ee-6728-44a6-b486-ff28df6cf772 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.933311668Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=768e7ba2-7778-410f-9a10-02b5a884716c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.933653038Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789660933629841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=768e7ba2-7778-410f-9a10-02b5a884716c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.934368886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1360019-b8df-44e1-8357-3303e6c3dec4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.934437055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1360019-b8df-44e1-8357-3303e6c3dec4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.934693393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97d81c851470ff8904f2a704c67542ee6316104d04ab675437a01ea6f07fbfd8,PodSandboxId:3e63d3f10ab367d7f1654c75a37a1da60094faf8cd712564e70d0d5dba1e0e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733789112587816469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55fe311f-4610-4805-9fb7-3f1cac7c96e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b237f65b1f52cc0946de1dff121714fb7635a1fbe982c80c699291423b82ccf8,PodSandboxId:37db34926854c5a7226a215f95806d6c5ad4d2b2363dc7f29cd5e4ee4c161c10,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112185958500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-56djc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e25bad9-b88a-4ac8-a180-968bf6b057a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a0e82982a44e43e0ed41ee3568d34c43378472c4ad70abc0278c8546b5eab1,PodSandboxId:b3eaf6f8899f7f7c5723661565f19184e409f490d8d44b49681ddbfa54f074e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112059041810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8rxx7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f
dfd41b-8ef7-41af-a703-ebecfe9ad319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b333a8bf49675208e97a25fca7ababa5c5e7aa1fa332e2d69b8abbca962a2c4,PodSandboxId:5c8d4180070cac533a1c5986663ae2a8f67bb62de994ac84b837d4580da01292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733789111448208569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qklxb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bb029a1-abf9-4825-b9ec-0520a78cb3d8,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c119307a718a68531ce047e24f34ab358273d68db90f69181c4d18122412639e,PodSandboxId:2f0a05c488178f0ab5a198f121cc114fa7792a6f4c7860cd619fc0c963929221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789100581038645,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36b46162662e8ae135696fdf8945315,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280dbddeda2f1b7a46a964534d86646310bb4e84b9820f666730c6b111b08f6,PodSandboxId:c72f6f0dc52dd73dcbd327ba4546c4ee3ad1aad42f23613604bc2f31fe087951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173378910061119
1956,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7012c4523526373176e1bdb3740fe4c3f0da98aefab113c29ec3fd8fc86dd,PodSandboxId:0604e23adb4603bc51d0c42824765a240119b427d9680120e1c727a002283b77,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789100574603874,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d3ccc5731b295ff6bc2d83c07ce51c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63e80d74c90d945f31cb7bab277568619312b598dfb63dd17020118f61ed233,PodSandboxId:32cf03c03dbf5e86519da793fb2cb443fe863efefb7f3e9afd145c7dbd3000fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789100520822298,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ef24a05a34425fdec92ff0a9fd5bbfa,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a425c2d931ef8748e90cd391f2121a62c9cbbaf1a686b2b9f5f811110fffbf58,PodSandboxId:7da95a45dfd0f1e9baa3c1bc2763cb0bd944cabac77d851c0432c361b0139d70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733788815385624290,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1360019-b8df-44e1-8357-3303e6c3dec4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.937437075Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=ca15124f-2d5d-435a-8b0c-cbf653460371 name=/runtime.v1.RuntimeService/Status
	Dec 10 00:14:20 no-preload-048296 crio[708]: time="2024-12-10 00:14:20.937529584Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=ca15124f-2d5d-435a-8b0c-cbf653460371 name=/runtime.v1.RuntimeService/Status
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	97d81c851470f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   3e63d3f10ab36       storage-provisioner
	b237f65b1f52c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   37db34926854c       coredns-7c65d6cfc9-56djc
	94a0e82982a44       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   b3eaf6f8899f7       coredns-7c65d6cfc9-8rxx7
	7b333a8bf4967       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   5c8d4180070ca       kube-proxy-qklxb
	9280dbddeda2f       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   c72f6f0dc52dd       kube-apiserver-no-preload-048296
	c119307a718a6       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   2f0a05c488178       kube-controller-manager-no-preload-048296
	2ad7012c45235       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   0604e23adb460       etcd-no-preload-048296
	a63e80d74c90d       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   32cf03c03dbf5       kube-scheduler-no-preload-048296
	a425c2d931ef8       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   7da95a45dfd0f       kube-apiserver-no-preload-048296
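The table above is the CRI-level view of the node: the same data returned by the /runtime.v1.RuntimeService/ListContainers calls logged earlier (and roughly what crictl ps -a prints), with container IDs truncated to their 13-character prefixes. A minimal Go sketch (illustrative, not from the test run), assuming the k8s.io/cri-api client and the crio socket path shown in the node's kubeadm cri-socket annotation (unix:///var/run/crio/crio.sock):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the CRI-O runtime endpoint from the node annotation above.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Same RPC the crio log records as /runtime.v1.RuntimeService/ListContainers
	// with an empty filter ("No filters were applied").
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-13.13s %-27s attempt=%d state=%s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}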
	
	
	==> coredns [94a0e82982a44e43e0ed41ee3568d34c43378472c4ad70abc0278c8546b5eab1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [b237f65b1f52cc0946de1dff121714fb7635a1fbe982c80c699291423b82ccf8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-048296
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-048296
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=no-preload-048296
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T00_05_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:05:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-048296
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:14:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:10:22 +0000   Tue, 10 Dec 2024 00:05:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:10:22 +0000   Tue, 10 Dec 2024 00:05:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:10:22 +0000   Tue, 10 Dec 2024 00:05:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:10:22 +0000   Tue, 10 Dec 2024 00:05:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.182
	  Hostname:    no-preload-048296
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f1dc8b7771a4a5b875e807a54aba941
	  System UUID:                0f1dc8b7-771a-4a5b-875e-807a54aba941
	  Boot ID:                    a9df5da5-b4ac-4b82-9890-7eae9599cfa2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-56djc                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-7c65d6cfc9-8rxx7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-no-preload-048296                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-no-preload-048296             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-048296    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-qklxb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-no-preload-048296             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-n2f8c              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m8s   kube-proxy       
	  Normal  Starting                 9m16s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s  kubelet          Node no-preload-048296 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s  kubelet          Node no-preload-048296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s  kubelet          Node no-preload-048296 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s  node-controller  Node no-preload-048296 event: Registered Node no-preload-048296 in Controller
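The "Allocated resources" percentages above follow directly from the Allocatable block (kubectl reports whole percentages); a quick check of the arithmetic:

  cpu requests:    950m of 2000m allocatable           -> 950/2000        = 47.5%  (shown as 47%)
  memory requests: 440Mi = 450560Ki of 2164184Ki       -> 450560/2164184  ~ 20.8%  (shown as 20%)
  memory limits:   340Mi = 348160Ki of 2164184Ki       -> 348160/2164184  ~ 16.1%  (shown as 16%)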
	
	
	==> dmesg <==
	[  +0.039409] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.141120] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.947644] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.626267] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.443706] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.061690] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057629] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.167722] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.135919] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.250292] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[Dec10 00:00] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.070665] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.699818] systemd-fstab-generator[1427]: Ignoring "noauto" option for root device
	[  +2.679912] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.136189] kauditd_printk_skb: 53 callbacks suppressed
	[ +27.680489] kauditd_printk_skb: 32 callbacks suppressed
	[Dec10 00:04] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.776819] systemd-fstab-generator[3106]: Ignoring "noauto" option for root device
	[Dec10 00:05] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.468178] systemd-fstab-generator[3423]: Ignoring "noauto" option for root device
	[  +5.395530] systemd-fstab-generator[3549]: Ignoring "noauto" option for root device
	[  +0.105350] kauditd_printk_skb: 14 callbacks suppressed
	[Dec10 00:06] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [2ad7012c4523526373176e1bdb3740fe4c3f0da98aefab113c29ec3fd8fc86dd] <==
	{"level":"info","ts":"2024-12-10T00:05:01.001816Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-10T00:05:01.002782Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.182:2380"}
	{"level":"info","ts":"2024-12-10T00:05:01.003953Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.182:2380"}
	{"level":"info","ts":"2024-12-10T00:05:01.011087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 switched to configuration voters=(2556367418693598595)"}
	{"level":"info","ts":"2024-12-10T00:05:01.011200Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5f1ac972b1bdc8ed","local-member-id":"237a0a9f829d3d83","added-peer-id":"237a0a9f829d3d83","added-peer-peer-urls":["https://192.168.61.182:2380"]}
	{"level":"info","ts":"2024-12-10T00:05:01.813144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-10T00:05:01.813226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-10T00:05:01.813264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 received MsgPreVoteResp from 237a0a9f829d3d83 at term 1"}
	{"level":"info","ts":"2024-12-10T00:05:01.813277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 became candidate at term 2"}
	{"level":"info","ts":"2024-12-10T00:05:01.813283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 received MsgVoteResp from 237a0a9f829d3d83 at term 2"}
	{"level":"info","ts":"2024-12-10T00:05:01.813299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 became leader at term 2"}
	{"level":"info","ts":"2024-12-10T00:05:01.813306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 237a0a9f829d3d83 elected leader 237a0a9f829d3d83 at term 2"}
	{"level":"info","ts":"2024-12-10T00:05:01.814523Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"237a0a9f829d3d83","local-member-attributes":"{Name:no-preload-048296 ClientURLs:[https://192.168.61.182:2379]}","request-path":"/0/members/237a0a9f829d3d83/attributes","cluster-id":"5f1ac972b1bdc8ed","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-10T00:05:01.814624Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:05:01.814686Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:05:01.815099Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:05:01.816181Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T00:05:01.817621Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T00:05:01.818173Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-10T00:05:01.820145Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-10T00:05:01.820915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-10T00:05:01.821484Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.182:2379"}
	{"level":"info","ts":"2024-12-10T00:05:01.821734Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5f1ac972b1bdc8ed","local-member-id":"237a0a9f829d3d83","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:05:01.827989Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:05:01.828041Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 00:14:21 up 14 min,  0 users,  load average: 0.05, 0.13, 0.10
	Linux no-preload-048296 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9280dbddeda2f1b7a46a964534d86646310bb4e84b9820f666730c6b111b08f6] <==
	W1210 00:10:04.179249       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:10:04.179443       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 00:10:04.180503       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:10:04.180798       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 00:11:04.180937       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:11:04.181244       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 00:11:04.181326       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:11:04.181375       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 00:11:04.183164       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:11:04.183195       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 00:13:04.183958       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:13:04.184082       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1210 00:13:04.184179       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:13:04.184282       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 00:13:04.185230       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:13:04.186417       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [a425c2d931ef8748e90cd391f2121a62c9cbbaf1a686b2b9f5f811110fffbf58] <==
	W1210 00:04:54.970227       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:54.986188       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.032527       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.038141       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.098727       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.112732       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.136217       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.159225       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.160487       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.350574       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.370158       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.376537       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.416096       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.424705       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.440513       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.443109       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.449507       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.571442       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.577803       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.662231       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.698343       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.699791       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.818961       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:56.001537       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:56.110688       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c119307a718a68531ce047e24f34ab358273d68db90f69181c4d18122412639e] <==
	E1210 00:09:10.134285       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:09:10.567188       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:09:40.139546       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:09:40.574602       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:10:10.145325       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:10:10.583617       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:10:22.557686       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-048296"
	E1210 00:10:40.151219       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:10:40.591361       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:11:10.157966       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:11:10.598589       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:11:11.753737       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="92.461µs"
	I1210 00:11:23.759723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="269.265µs"
	E1210 00:11:40.163606       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:11:40.605724       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:12:10.169745       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:12:10.612616       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:12:40.176269       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:12:40.619573       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:13:10.182200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:13:10.626431       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:13:40.187496       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:13:40.633938       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:14:10.195185       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:14:10.641400       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
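The repeated resource-quota and garbage-collector errors above all point at the aggregated metrics.k8s.io/v1beta1 API being registered but unserved: its backing metrics-server pod never starts (the kubelet log further down shows ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4), so the apiserver answers 503 for that group version. A minimal client-go sketch (an assumption for illustration, not part of the report) that reproduces the same discovery failure against the cluster's default kubeconfig:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the default kubeconfig loading rules.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{},
	).ClientConfig()
	if err != nil {
		panic(err)
	}

	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// While metrics-server is down this typically fails with a 503-class
	// error, the same condition the controller-manager reports as
	// "stale GroupVersion discovery: metrics.k8s.io/v1beta1".
	if _, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1"); err != nil {
		fmt.Println("metrics.k8s.io/v1beta1 unavailable:", err)
		return
	}
	fmt.Println("metrics.k8s.io/v1beta1 is being served")
}

Checking the APIService directly (kubectl get apiservice v1beta1.metrics.k8s.io) would likely show the corresponding Available=False condition while the pod is stuck pulling its image.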
	
	
	==> kube-proxy [7b333a8bf49675208e97a25fca7ababa5c5e7aa1fa332e2d69b8abbca962a2c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 00:05:11.990681       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 00:05:12.023994       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.182"]
	E1210 00:05:12.024121       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 00:05:12.431239       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 00:05:12.431307       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 00:05:12.431333       1 server_linux.go:169] "Using iptables Proxier"
	I1210 00:05:12.439480       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 00:05:12.439688       1 server.go:483] "Version info" version="v1.31.2"
	I1210 00:05:12.439720       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 00:05:12.465079       1 config.go:199] "Starting service config controller"
	I1210 00:05:12.465113       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 00:05:12.465144       1 config.go:105] "Starting endpoint slice config controller"
	I1210 00:05:12.465148       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 00:05:12.465530       1 config.go:328] "Starting node config controller"
	I1210 00:05:12.465557       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 00:05:12.566263       1 shared_informer.go:320] Caches are synced for node config
	I1210 00:05:12.566272       1 shared_informer.go:320] Caches are synced for service config
	I1210 00:05:12.566324       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a63e80d74c90d945f31cb7bab277568619312b598dfb63dd17020118f61ed233] <==
	W1210 00:05:03.222985       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 00:05:03.223024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:03.223091       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1210 00:05:03.223119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:03.223375       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 00:05:03.223459       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:03.226074       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 00:05:03.226113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.041310       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1210 00:05:04.041373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.088034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1210 00:05:04.088096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.190450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 00:05:04.190494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.263377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 00:05:04.263426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.365287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 00:05:04.365340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.370243       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1210 00:05:04.370289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.441548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 00:05:04.441687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.466720       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 00:05:04.466800       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1210 00:05:07.682567       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 00:13:09 no-preload-048296 kubelet[3430]: E1210 00:13:09.741743    3430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n2f8c" podUID="8e9f56c9-fd67-4715-9148-1255be17f1fe"
	Dec 10 00:13:15 no-preload-048296 kubelet[3430]: E1210 00:13:15.895291    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789595894933713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:15 no-preload-048296 kubelet[3430]: E1210 00:13:15.895564    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789595894933713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:24 no-preload-048296 kubelet[3430]: E1210 00:13:24.740960    3430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n2f8c" podUID="8e9f56c9-fd67-4715-9148-1255be17f1fe"
	Dec 10 00:13:25 no-preload-048296 kubelet[3430]: E1210 00:13:25.898154    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789605897639905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:25 no-preload-048296 kubelet[3430]: E1210 00:13:25.898199    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789605897639905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:35 no-preload-048296 kubelet[3430]: E1210 00:13:35.899707    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789615899342899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:35 no-preload-048296 kubelet[3430]: E1210 00:13:35.899772    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789615899342899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:39 no-preload-048296 kubelet[3430]: E1210 00:13:39.741524    3430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n2f8c" podUID="8e9f56c9-fd67-4715-9148-1255be17f1fe"
	Dec 10 00:13:45 no-preload-048296 kubelet[3430]: E1210 00:13:45.901602    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789625901362018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:45 no-preload-048296 kubelet[3430]: E1210 00:13:45.901627    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789625901362018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:54 no-preload-048296 kubelet[3430]: E1210 00:13:54.741506    3430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n2f8c" podUID="8e9f56c9-fd67-4715-9148-1255be17f1fe"
	Dec 10 00:13:55 no-preload-048296 kubelet[3430]: E1210 00:13:55.902905    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789635902583717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:55 no-preload-048296 kubelet[3430]: E1210 00:13:55.902933    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789635902583717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:14:05 no-preload-048296 kubelet[3430]: E1210 00:14:05.742306    3430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n2f8c" podUID="8e9f56c9-fd67-4715-9148-1255be17f1fe"
	Dec 10 00:14:05 no-preload-048296 kubelet[3430]: E1210 00:14:05.755294    3430 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 00:14:05 no-preload-048296 kubelet[3430]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 00:14:05 no-preload-048296 kubelet[3430]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 00:14:05 no-preload-048296 kubelet[3430]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:14:05 no-preload-048296 kubelet[3430]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:14:05 no-preload-048296 kubelet[3430]: E1210 00:14:05.904771    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789645904392682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:14:05 no-preload-048296 kubelet[3430]: E1210 00:14:05.904814    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789645904392682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:14:15 no-preload-048296 kubelet[3430]: E1210 00:14:15.906732    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789655906408248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:14:15 no-preload-048296 kubelet[3430]: E1210 00:14:15.906772    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789655906408248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:14:20 no-preload-048296 kubelet[3430]: E1210 00:14:20.741022    3430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n2f8c" podUID="8e9f56c9-fd67-4715-9148-1255be17f1fe"
	
	
	==> storage-provisioner [97d81c851470ff8904f2a704c67542ee6316104d04ab675437a01ea6f07fbfd8] <==
	I1210 00:05:12.693725       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 00:05:12.713369       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 00:05:12.713514       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 00:05:12.725931       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 00:05:12.726255       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-048296_dbf8be4c-2ead-454a-9e06-4ce0104bad17!
	I1210 00:05:12.728292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"90ea2f99-6d89-41c8-bb4b-6fa2ca14b65a", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-048296_dbf8be4c-2ead-454a-9e06-4ce0104bad17 became leader
	I1210 00:05:12.827570       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-048296_dbf8be4c-2ead-454a-9e06-4ce0104bad17!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-048296 -n no-preload-048296
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-048296 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-n2f8c
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-048296 describe pod metrics-server-6867b74b74-n2f8c
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-048296 describe pod metrics-server-6867b74b74-n2f8c: exit status 1 (65.991288ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-n2f8c" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-048296 describe pod metrics-server-6867b74b74-n2f8c: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.15s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:07:56.169159   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:08:00.639347   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:08:12.522983   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:08:28.185855   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:08:29.410214   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:08:38.331224   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:09:20.544614   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:09:23.701805   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:09:51.250503   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:10:01.397258   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:10:11.198672   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:10:26.332765   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:10:43.608837   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:10:48.761774   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:11:33.103133   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:11:34.262137   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:13:00.639712   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:13:12.522728   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:13:28.186607   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:13:38.331236   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:14:20.544339   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:15:11.198882   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:15:26.333432   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:15:48.762673   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:16:15.598980   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:16:33.103302   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-720064 -n old-k8s-version-720064
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-720064 -n old-k8s-version-720064: exit status 2 (229.814095ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-720064" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064: exit status 2 (235.125361ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-720064 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-720064 logs -n 25: (1.495968382s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-030585 sudo cat                              | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo find                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo crio                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-030585                                       | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-866797 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | disable-driver-mounts-866797                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:52 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-048296             | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-825613            | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-871210  | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC | 09 Dec 24 23:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC |                     |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-720064        | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-048296                  | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-825613                 | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-871210       | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC | 10 Dec 24 00:04 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-720064             | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:55:25
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:55:25.509412   84547 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:55:25.509516   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509524   84547 out.go:358] Setting ErrFile to fd 2...
	I1209 23:55:25.509541   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509720   84547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:55:25.510225   84547 out.go:352] Setting JSON to false
	I1209 23:55:25.511117   84547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9476,"bootTime":1733779049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:55:25.511206   84547 start.go:139] virtualization: kvm guest
	I1209 23:55:25.513214   84547 out.go:177] * [old-k8s-version-720064] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:55:25.514823   84547 notify.go:220] Checking for updates...
	I1209 23:55:25.514845   84547 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:55:25.516067   84547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:55:25.517350   84547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:55:25.518678   84547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:55:25.520082   84547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:55:25.521432   84547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:55:25.523092   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:55:25.523499   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.523548   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.538209   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I1209 23:55:25.538728   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.539263   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.539282   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.539570   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.539735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.541550   84547 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 23:55:25.542864   84547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:55:25.543159   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.543196   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.558672   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1209 23:55:25.559075   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.559524   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.559547   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.559881   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.560082   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.595916   84547 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 23:55:25.597365   84547 start.go:297] selected driver: kvm2
	I1209 23:55:25.597380   84547 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.597473   84547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:55:25.598182   84547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.598251   84547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:55:25.612993   84547 install.go:137] /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:55:25.613401   84547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:55:25.613430   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:55:25.613473   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:55:25.613509   84547 start.go:340] cluster config:
	{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.613608   84547 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.615371   84547 out.go:177] * Starting "old-k8s-version-720064" primary control-plane node in "old-k8s-version-720064" cluster
	I1209 23:55:25.371778   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:25.616599   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:55:25.616629   84547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 23:55:25.616636   84547 cache.go:56] Caching tarball of preloaded images
	I1209 23:55:25.616716   84547 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:55:25.616727   84547 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1209 23:55:25.616809   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:55:25.616984   84547 start.go:360] acquireMachinesLock for old-k8s-version-720064: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:55:28.439810   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:34.519811   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:37.591794   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:43.671858   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:46.743891   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:52.823826   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:55.895854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:01.975845   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:05.047862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:11.127840   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:14.199834   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:20.279853   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:23.351850   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:29.431829   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:32.503859   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:38.583822   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:41.655831   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:47.735829   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:50.807862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:56.887827   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:59.959820   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:06.039852   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:09.111843   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:15.191823   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:18.263824   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:24.343852   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:27.415807   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:33.495843   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:36.567854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:42.647854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:45.719902   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:51.799842   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:54.871862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:00.951877   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:04.023859   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:10.103869   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:13.175788   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:19.255805   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:22.327784   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:28.407842   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:31.479864   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:34.483630   83900 start.go:364] duration metric: took 4m35.142298634s to acquireMachinesLock for "embed-certs-825613"
	I1209 23:58:34.483690   83900 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:58:34.483698   83900 fix.go:54] fixHost starting: 
	I1209 23:58:34.484038   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:58:34.484080   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:58:34.499267   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I1209 23:58:34.499717   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:58:34.500208   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:58:34.500236   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:58:34.500552   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:58:34.500747   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:34.500886   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:58:34.502318   83900 fix.go:112] recreateIfNeeded on embed-certs-825613: state=Stopped err=<nil>
	I1209 23:58:34.502340   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	W1209 23:58:34.502480   83900 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:58:34.504290   83900 out.go:177] * Restarting existing kvm2 VM for "embed-certs-825613" ...
	I1209 23:58:34.481505   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:58:34.481545   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:58:34.481889   83859 buildroot.go:166] provisioning hostname "no-preload-048296"
	I1209 23:58:34.481917   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:58:34.482120   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:58:34.483492   83859 machine.go:96] duration metric: took 4m37.418278223s to provisionDockerMachine
	I1209 23:58:34.483534   83859 fix.go:56] duration metric: took 4m37.439725581s for fixHost
	I1209 23:58:34.483542   83859 start.go:83] releasing machines lock for "no-preload-048296", held for 4m37.439753726s
	W1209 23:58:34.483579   83859 start.go:714] error starting host: provision: host is not running
	W1209 23:58:34.483682   83859 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1209 23:58:34.483694   83859 start.go:729] Will try again in 5 seconds ...
	I1209 23:58:34.505520   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Start
	I1209 23:58:34.505676   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring networks are active...
	I1209 23:58:34.506381   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring network default is active
	I1209 23:58:34.506676   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring network mk-embed-certs-825613 is active
	I1209 23:58:34.507046   83900 main.go:141] libmachine: (embed-certs-825613) Getting domain xml...
	I1209 23:58:34.507727   83900 main.go:141] libmachine: (embed-certs-825613) Creating domain...
	I1209 23:58:35.719784   83900 main.go:141] libmachine: (embed-certs-825613) Waiting to get IP...
	I1209 23:58:35.720797   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:35.721325   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:35.721377   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:35.721289   85186 retry.go:31] will retry after 251.225165ms: waiting for machine to come up
	I1209 23:58:35.973801   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:35.974247   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:35.974276   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:35.974205   85186 retry.go:31] will retry after 298.029679ms: waiting for machine to come up
	I1209 23:58:36.273658   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:36.274090   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:36.274119   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:36.274049   85186 retry.go:31] will retry after 467.217404ms: waiting for machine to come up
	I1209 23:58:36.742591   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:36.743027   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:36.743052   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:36.742982   85186 retry.go:31] will retry after 409.741731ms: waiting for machine to come up
	I1209 23:58:37.154543   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:37.154958   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:37.154982   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:37.154916   85186 retry.go:31] will retry after 723.02759ms: waiting for machine to come up
	I1209 23:58:37.878986   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:37.879474   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:37.879507   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:37.879423   85186 retry.go:31] will retry after 767.399861ms: waiting for machine to come up
	I1209 23:58:38.648416   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:38.648903   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:38.648927   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:38.648853   85186 retry.go:31] will retry after 1.06423499s: waiting for machine to come up
	I1209 23:58:39.485363   83859 start.go:360] acquireMachinesLock for no-preload-048296: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:58:39.714711   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:39.715114   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:39.715147   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:39.715058   85186 retry.go:31] will retry after 1.402884688s: waiting for machine to come up
	I1209 23:58:41.119828   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:41.120315   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:41.120359   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:41.120258   85186 retry.go:31] will retry after 1.339874475s: waiting for machine to come up
	I1209 23:58:42.461858   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:42.462314   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:42.462343   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:42.462261   85186 retry.go:31] will retry after 1.455809273s: waiting for machine to come up
	I1209 23:58:43.920097   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:43.920418   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:43.920448   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:43.920371   85186 retry.go:31] will retry after 2.872969061s: waiting for machine to come up
	I1209 23:58:46.796163   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:46.796477   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:46.796499   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:46.796439   85186 retry.go:31] will retry after 2.530677373s: waiting for machine to come up
	I1209 23:58:49.330146   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:49.330601   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:49.330631   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:49.330561   85186 retry.go:31] will retry after 4.492372507s: waiting for machine to come up
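	The retry.go lines above are minikube's poll-with-backoff loop while the restarted VM waits for a DHCP lease. For illustration only, a minimal Go sketch of that pattern, using a hypothetical lookupIP helper rather than minikube's actual libvirt code:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for reading the libvirt DHCP leases
	// for the domain's MAC address; it fails until the guest has an address.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP polls lookupIP with a growing, jittered delay, mirroring the
	// "will retry after ...: waiting for machine to come up" lines above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay *= 2 // back off between attempts
		}
		return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
	}

	func main() {
		if ip, err := waitForIP("52:54:00:0f:2e:da", 3*time.Second); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println("got IP:", ip)
		}
	}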
	I1209 23:58:53.827982   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.828461   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has current primary IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.828480   83900 main.go:141] libmachine: (embed-certs-825613) Found IP for machine: 192.168.50.19
	I1209 23:58:53.828492   83900 main.go:141] libmachine: (embed-certs-825613) Reserving static IP address...
	I1209 23:58:53.829001   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "embed-certs-825613", mac: "52:54:00:0f:2e:da", ip: "192.168.50.19"} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.829028   83900 main.go:141] libmachine: (embed-certs-825613) Reserved static IP address: 192.168.50.19
	I1209 23:58:53.829051   83900 main.go:141] libmachine: (embed-certs-825613) DBG | skip adding static IP to network mk-embed-certs-825613 - found existing host DHCP lease matching {name: "embed-certs-825613", mac: "52:54:00:0f:2e:da", ip: "192.168.50.19"}
	I1209 23:58:53.829067   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Getting to WaitForSSH function...
	I1209 23:58:53.829080   83900 main.go:141] libmachine: (embed-certs-825613) Waiting for SSH to be available...
	I1209 23:58:53.831079   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.831430   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.831462   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.831630   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Using SSH client type: external
	I1209 23:58:53.831682   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa (-rw-------)
	I1209 23:58:53.831723   83900 main.go:141] libmachine: (embed-certs-825613) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:58:53.831752   83900 main.go:141] libmachine: (embed-certs-825613) DBG | About to run SSH command:
	I1209 23:58:53.831765   83900 main.go:141] libmachine: (embed-certs-825613) DBG | exit 0
	I1209 23:58:53.959446   83900 main.go:141] libmachine: (embed-certs-825613) DBG | SSH cmd err, output: <nil>: 
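	The WaitForSSH step above shells out to the system ssh client with the options shown in the log and runs "exit 0" to probe reachability. A sketch of that probe from Go (the address and key path are copied from the log; this is an illustration, not minikube's implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probeSSH runs `ssh ... exit 0` against the guest, mirroring the external
	// client command in the log above; addr and key come from the machine config.
	func probeSSH(addr, key string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "ControlMaster=no",
			"-o", "ControlPath=none",
			"-o", "LogLevel=quiet",
			"-o", "PasswordAuthentication=no",
			"-o", "ServerAliveInterval=60",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"docker@" + addr,
			"-o", "IdentitiesOnly=yes",
			"-i", key,
			"-p", "22",
			"exit 0",
		}
		return exec.Command("/usr/bin/ssh", args...).Run()
	}

	func main() {
		err := probeSSH("192.168.50.19", "/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa")
		fmt.Println("ssh reachable:", err == nil)
	}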
	I1209 23:58:53.959864   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetConfigRaw
	I1209 23:58:53.960515   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:53.963227   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.963614   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.963644   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.963889   83900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/config.json ...
	I1209 23:58:53.964086   83900 machine.go:93] provisionDockerMachine start ...
	I1209 23:58:53.964103   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:53.964299   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:53.966516   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.966803   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.966834   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.966959   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:53.967149   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:53.967292   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:53.967428   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:53.967599   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:53.967824   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:53.967839   83900 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:58:54.079701   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:58:54.079732   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.080045   83900 buildroot.go:166] provisioning hostname "embed-certs-825613"
	I1209 23:58:54.080079   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.080333   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.082930   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.083283   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.083321   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.083420   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.083657   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.083834   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.083974   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.084095   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.084269   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.084282   83900 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-825613 && echo "embed-certs-825613" | sudo tee /etc/hostname
	I1209 23:58:54.208724   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825613
	
	I1209 23:58:54.208754   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.211739   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.212095   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.212122   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.212297   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.212513   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.212763   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.212936   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.213102   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.213285   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.213311   83900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-825613' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-825613/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-825613' | sudo tee -a /etc/hosts; 
				fi
			fi
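	The shell fragment above is how provisioning makes the new hostname resolve locally: it rewrites or appends the 127.0.1.1 entry in the guest's /etc/hosts. A small Go sketch that generates the same snippet for an arbitrary hostname (hostsFixCmd is a hypothetical helper for illustration):

	package main

	import "fmt"

	// hostsFixCmd returns a shell snippet equivalent to the one in the log above:
	// add or rewrite the 127.0.1.1 entry so the hostname resolves locally.
	func hostsFixCmd(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	}

	func main() {
		fmt.Println(hostsFixCmd("embed-certs-825613"))
	}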
	I1209 23:58:55.043989   84259 start.go:364] duration metric: took 4m2.194304919s to acquireMachinesLock for "default-k8s-diff-port-871210"
	I1209 23:58:55.044059   84259 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:58:55.044067   84259 fix.go:54] fixHost starting: 
	I1209 23:58:55.044480   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:58:55.044546   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:58:55.060908   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I1209 23:58:55.061391   84259 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:58:55.061954   84259 main.go:141] libmachine: Using API Version  1
	I1209 23:58:55.061982   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:58:55.062346   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:58:55.062508   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:58:55.062674   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1209 23:58:55.064365   84259 fix.go:112] recreateIfNeeded on default-k8s-diff-port-871210: state=Stopped err=<nil>
	I1209 23:58:55.064386   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	W1209 23:58:55.064564   84259 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:58:55.066707   84259 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-871210" ...
	I1209 23:58:54.331911   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:58:54.331941   83900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:58:54.331966   83900 buildroot.go:174] setting up certificates
	I1209 23:58:54.331978   83900 provision.go:84] configureAuth start
	I1209 23:58:54.331991   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.332275   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:54.334597   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.334926   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.334957   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.335117   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.337393   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.337731   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.337763   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.337876   83900 provision.go:143] copyHostCerts
	I1209 23:58:54.337923   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:58:54.337936   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:58:54.338006   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:58:54.338093   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:58:54.338101   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:58:54.338127   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:58:54.338204   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:58:54.338212   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:58:54.338233   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:58:54.338286   83900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.embed-certs-825613 san=[127.0.0.1 192.168.50.19 embed-certs-825613 localhost minikube]
	I1209 23:58:54.423610   83900 provision.go:177] copyRemoteCerts
	I1209 23:58:54.423679   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:58:54.423706   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.426695   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.427009   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.427041   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.427144   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.427332   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.427516   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.427706   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:54.513991   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 23:58:54.536700   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:58:54.558541   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:58:54.580319   83900 provision.go:87] duration metric: took 248.326924ms to configureAuth
	I1209 23:58:54.580359   83900 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:58:54.580568   83900 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:58:54.580652   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.583347   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.583720   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.583748   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.583963   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.584171   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.584327   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.584481   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.584621   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.584775   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.584790   83900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:58:54.804814   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:58:54.804860   83900 machine.go:96] duration metric: took 840.759848ms to provisionDockerMachine
	I1209 23:58:54.804876   83900 start.go:293] postStartSetup for "embed-certs-825613" (driver="kvm2")
	I1209 23:58:54.804891   83900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:58:54.804916   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:54.805269   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:58:54.805297   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.807977   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.808340   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.808370   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.808505   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.808765   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.808945   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.809100   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:54.893741   83900 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:58:54.897936   83900 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:58:54.897961   83900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:58:54.898065   83900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:58:54.898145   83900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:58:54.898235   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:58:54.907056   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:58:54.929618   83900 start.go:296] duration metric: took 124.726532ms for postStartSetup
	I1209 23:58:54.929664   83900 fix.go:56] duration metric: took 20.445966428s for fixHost
	I1209 23:58:54.929711   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.932476   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.932866   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.932893   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.933108   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.933334   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.933523   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.933638   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.933747   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.933956   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.933967   83900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:58:55.043841   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788735.012193373
	
	I1209 23:58:55.043863   83900 fix.go:216] guest clock: 1733788735.012193373
	I1209 23:58:55.043870   83900 fix.go:229] Guest: 2024-12-09 23:58:55.012193373 +0000 UTC Remote: 2024-12-09 23:58:54.929689658 +0000 UTC m=+295.728685701 (delta=82.503715ms)
	I1209 23:58:55.043888   83900 fix.go:200] guest clock delta is within tolerance: 82.503715ms
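	The fix.go lines above compare the guest's "date +%s.%N" output with the host clock and accept the drift when it is within tolerance. A rough Go sketch of that comparison, assuming a seconds.nanoseconds string and an illustrative 1s tolerance (not the actual fix.go logic):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns the absolute
	// drift from the host clock, mirroring the guest-clock check logged above.
	func clockDelta(guestOut string) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, nil
	}

	func main() {
		d, err := clockDelta("1733788735.012193373")
		if err != nil {
			fmt.Println("parse error:", err)
			return
		}
		const tolerance = 1 * time.Second // assumed tolerance for illustration
		fmt.Printf("delta=%s within tolerance=%v\n", d, d <= tolerance)
	}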
	I1209 23:58:55.043892   83900 start.go:83] releasing machines lock for "embed-certs-825613", held for 20.560223511s
	I1209 23:58:55.043915   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.044198   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:55.046852   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.047246   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.047283   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.047446   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.047910   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.048101   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.048198   83900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:58:55.048235   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:55.048424   83900 ssh_runner.go:195] Run: cat /version.json
	I1209 23:58:55.048453   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:55.050905   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051274   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.051317   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051339   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051502   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:55.051721   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:55.051772   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.051794   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051925   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:55.051980   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:55.052091   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:55.052133   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:55.052263   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:55.052407   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:55.136384   83900 ssh_runner.go:195] Run: systemctl --version
	I1209 23:58:55.154927   83900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:58:55.297639   83900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:58:55.303733   83900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:58:55.303791   83900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:58:55.323858   83900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:58:55.323884   83900 start.go:495] detecting cgroup driver to use...
	I1209 23:58:55.323955   83900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:58:55.342390   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:58:55.360400   83900 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:58:55.360463   83900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:58:55.374005   83900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:58:55.387515   83900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:58:55.507866   83900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:58:55.676259   83900 docker.go:233] disabling docker service ...
	I1209 23:58:55.676338   83900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:58:55.695273   83900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:58:55.707909   83900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:58:55.824683   83900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:58:55.934080   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:58:55.950700   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:58:55.967756   83900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:58:55.967813   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.978028   83900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:58:55.978102   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.988960   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.999661   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.010253   83900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:58:56.020928   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.030944   83900 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.050251   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
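
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager. The real edits run as `sed -i` over SSH; what follows is only a local Go sketch of the same per-line rewrite (the setCrioOption helper is hypothetical, and running it requires root):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setCrioOption rewrites a `key = value` line in a CRI-O drop-in config,
    // mirroring the `sudo sed -i 's|^.*pause_image = .*$|...|'` calls above.
    func setCrioOption(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	// Path and values taken from the log lines above.
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    	if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
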
	I1209 23:58:56.062723   83900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:58:56.072435   83900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:58:56.072501   83900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:58:56.085332   83900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:58:56.095538   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:58:56.214133   83900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:58:56.310023   83900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:58:56.310107   83900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:58:56.314988   83900 start.go:563] Will wait 60s for crictl version
	I1209 23:58:56.315057   83900 ssh_runner.go:195] Run: which crictl
	I1209 23:58:56.318865   83900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:58:56.356889   83900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:58:56.356996   83900 ssh_runner.go:195] Run: crio --version
	I1209 23:58:56.387128   83900 ssh_runner.go:195] Run: crio --version
	I1209 23:58:56.417781   83900 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:58:55.068139   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Start
	I1209 23:58:55.068329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring networks are active...
	I1209 23:58:55.069026   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring network default is active
	I1209 23:58:55.069526   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring network mk-default-k8s-diff-port-871210 is active
	I1209 23:58:55.069970   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Getting domain xml...
	I1209 23:58:55.070725   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Creating domain...
	I1209 23:58:56.366161   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting to get IP...
	I1209 23:58:56.367215   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.367659   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.367720   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.367639   85338 retry.go:31] will retry after 212.733452ms: waiting for machine to come up
	I1209 23:58:56.582400   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.582945   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.582973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.582895   85338 retry.go:31] will retry after 380.03081ms: waiting for machine to come up
	I1209 23:58:56.964721   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.965265   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.965296   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.965208   85338 retry.go:31] will retry after 429.612511ms: waiting for machine to come up
	I1209 23:58:57.396713   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.397267   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.397298   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:57.397236   85338 retry.go:31] will retry after 595.581233ms: waiting for machine to come up
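
The retry.go lines above poll libvirt for a DHCP lease on the machine's MAC address and wait a little longer after each miss. A rough Go sketch of that wait loop under stated assumptions (the waitForIP/lookup names and the doubling-plus-jitter schedule are illustrative, not minikube's own retry code):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup until it returns an address, sleeping a jittered,
    // growing interval between attempts. lookup stands in for "read the DHCP
    // leases for this MAC address".
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookup(); err == nil && ip != "" {
    			return ip, nil
    		}
    		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		if backoff < 2*time.Second {
    			backoff *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 4 {
    			return "", errors.New("no lease yet")
    		}
    		return "192.168.39.10", nil // stand-in address for the demo
    	}, time.Minute)
    	fmt.Println(ip, err)
    }
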
	I1209 23:58:56.418967   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:56.422317   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:56.422632   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:56.422656   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:56.422925   83900 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 23:58:56.426992   83900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:58:56.440436   83900 kubeadm.go:883] updating cluster {Name:embed-certs-825613 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:58:56.440548   83900 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:58:56.440592   83900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:58:56.475823   83900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:58:56.475907   83900 ssh_runner.go:195] Run: which lz4
	I1209 23:58:56.479747   83900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:58:56.483850   83900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:58:56.483900   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:58:57.820979   83900 crio.go:462] duration metric: took 1.341265764s to copy over tarball
	I1209 23:58:57.821091   83900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:58:57.994077   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.994566   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.994590   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:57.994526   85338 retry.go:31] will retry after 728.981009ms: waiting for machine to come up
	I1209 23:58:58.725707   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:58.726209   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:58.726252   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:58.726125   85338 retry.go:31] will retry after 701.836089ms: waiting for machine to come up
	I1209 23:58:59.429804   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:59.430322   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:59.430350   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:59.430266   85338 retry.go:31] will retry after 735.800538ms: waiting for machine to come up
	I1209 23:59:00.167774   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:00.168363   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:00.168397   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:00.168319   85338 retry.go:31] will retry after 1.052511845s: waiting for machine to come up
	I1209 23:59:01.222463   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:01.222960   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:01.222991   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:01.222919   85338 retry.go:31] will retry after 1.655880765s: waiting for machine to come up
	I1209 23:58:59.925304   83900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104178296s)
	I1209 23:58:59.925338   83900 crio.go:469] duration metric: took 2.104325305s to extract the tarball
	I1209 23:58:59.925348   83900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:58:59.962716   83900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:00.008762   83900 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:59:00.008793   83900 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:59:00.008803   83900 kubeadm.go:934] updating node { 192.168.50.19 8443 v1.31.2 crio true true} ...
	I1209 23:59:00.008929   83900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-825613 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:00.009008   83900 ssh_runner.go:195] Run: crio config
	I1209 23:59:00.050777   83900 cni.go:84] Creating CNI manager for ""
	I1209 23:59:00.050801   83900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:00.050813   83900 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:00.050842   83900 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.19 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-825613 NodeName:embed-certs-825613 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:59:00.051001   83900 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-825613"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.19"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.19"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:00.051083   83900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:59:00.062278   83900 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:00.062354   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:00.073157   83900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1209 23:59:00.088933   83900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:00.104358   83900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1209 23:59:00.122425   83900 ssh_runner.go:195] Run: grep 192.168.50.19	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:00.126224   83900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:00.138974   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:00.252489   83900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:00.269127   83900 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613 for IP: 192.168.50.19
	I1209 23:59:00.269155   83900 certs.go:194] generating shared ca certs ...
	I1209 23:59:00.269177   83900 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:00.269359   83900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:00.269406   83900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:00.269421   83900 certs.go:256] generating profile certs ...
	I1209 23:59:00.269551   83900 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/client.key
	I1209 23:59:00.269651   83900 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.key.12f830e7
	I1209 23:59:00.269724   83900 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.key
	I1209 23:59:00.269901   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:00.269954   83900 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:00.269968   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:00.270007   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:00.270056   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:00.270096   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:00.270161   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:00.271078   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:00.322392   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:00.351665   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:00.378615   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:00.401622   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 23:59:00.430280   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 23:59:00.453489   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:00.478368   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:00.501404   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:00.524273   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:00.546132   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:00.568553   83900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:00.584196   83900 ssh_runner.go:195] Run: openssl version
	I1209 23:59:00.589905   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:00.600107   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.604436   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.604504   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.609960   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:00.620129   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:00.629895   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.633993   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.634053   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.639538   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:00.649781   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:00.659593   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.663789   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.663832   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.669033   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:00.678881   83900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:00.683006   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:00.688631   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:00.694212   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:00.699930   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:00.705400   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:00.710668   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
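
Each `openssl x509 -noout -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours. The same check can be sketched with Go's crypto/x509 (the expiresWithin helper is hypothetical; the certificate path and the 24h window are taken from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within d, which is the condition `openssl x509 -checkend 86400` tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
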
	I1209 23:59:00.715982   83900 kubeadm.go:392] StartCluster: {Name:embed-certs-825613 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:00.716059   83900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:00.716094   83900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:00.758093   83900 cri.go:89] found id: ""
	I1209 23:59:00.758164   83900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:00.767877   83900 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:00.767895   83900 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:00.767932   83900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:00.777172   83900 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:00.778578   83900 kubeconfig.go:125] found "embed-certs-825613" server: "https://192.168.50.19:8443"
	I1209 23:59:00.780691   83900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:00.790459   83900 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.19
	I1209 23:59:00.790492   83900 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:00.790507   83900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:00.790563   83900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:00.831048   83900 cri.go:89] found id: ""
	I1209 23:59:00.831129   83900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:00.847797   83900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:00.857819   83900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:00.857841   83900 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:00.857877   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:59:00.867108   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:00.867159   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:00.876742   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:59:00.885861   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:00.885942   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:00.895651   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:59:00.904027   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:00.904092   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:00.912956   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:59:00.921647   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:00.921704   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:00.930445   83900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:00.939496   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:01.049561   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.012953   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.234060   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.302650   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
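
The five commands above replay `kubeadm init` one phase at a time (certs, kubeconfig, kubelet-start, control-plane, etcd local) against the generated /var/tmp/minikube/kubeadm.yaml, with PATH pointing at the cached v1.31.2 binaries. A hedged Go sketch of such an invocation, assuming direct exec of `sudo env ...` rather than the `/bin/bash -c` wrapper the log shows (the runInitPhase helper is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // runInitPhase mirrors the `sudo env PATH=... kubeadm init phase <phase> --config ...`
    // invocations above; binDir and configPath come from the log.
    func runInitPhase(binDir, configPath string, phase ...string) error {
    	args := append([]string{binDir + "/kubeadm", "init", "phase"}, phase...)
    	args = append(args, "--config", configPath)
    	cmd := exec.Command("sudo", append([]string{"env", "PATH=" + binDir + ":" + os.Getenv("PATH")}, args...)...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	bin := "/var/lib/minikube/binaries/v1.31.2"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	for _, phase := range [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	} {
    		if err := runInitPhase(bin, cfg, phase...); err != nil {
    			fmt.Fprintln(os.Stderr, "phase failed:", err)
    			return
    		}
    	}
    }
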
	I1209 23:59:02.378572   83900 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:02.378677   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:02.878755   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:03.379050   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:03.879215   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:02.880033   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:02.880532   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:02.880560   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:02.880488   85338 retry.go:31] will retry after 2.112793708s: waiting for machine to come up
	I1209 23:59:04.994713   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:04.995223   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:04.995254   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:04.995183   85338 retry.go:31] will retry after 2.420394988s: waiting for machine to come up
	I1209 23:59:07.416846   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:07.417309   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:07.417338   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:07.417281   85338 retry.go:31] will retry after 2.479420656s: waiting for machine to come up
	I1209 23:59:04.379735   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:04.406446   83900 api_server.go:72] duration metric: took 2.027872494s to wait for apiserver process to appear ...
	I1209 23:59:04.406480   83900 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:59:04.406506   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:04.407056   83900 api_server.go:269] stopped: https://192.168.50.19:8443/healthz: Get "https://192.168.50.19:8443/healthz": dial tcp 192.168.50.19:8443: connect: connection refused
	I1209 23:59:04.907381   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:06.967755   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:06.967791   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:06.967808   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.003261   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:07.003295   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:07.406692   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.411139   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:07.411168   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:07.906689   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.911136   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:07.911176   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:08.406697   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:08.411051   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 200:
	ok
	I1209 23:59:08.422472   83900 api_server.go:141] control plane version: v1.31.2
	I1209 23:59:08.422506   83900 api_server.go:131] duration metric: took 4.01601823s to wait for apiserver health ...
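
The healthz probes above hit https://192.168.50.19:8443/healthz without client credentials, so the apiserver treats them as system:anonymous: first a connection refused, then 403, then 500 while the RBAC and priority-class bootstrap hooks finish, and finally 200. A minimal Go sketch of such a probe (the checkHealthz helper, the 500ms poll interval, and the attempt cap are assumptions):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz issues the same unauthenticated GET the log shows, skipping
    // TLS verification because the apiserver presents a cluster-local cert.
    func checkHealthz(url string) (int, string, error) {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return 0, "", err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode, string(body), nil
    }

    func main() {
    	url := "https://192.168.50.19:8443/healthz" // endpoint from the log
    	for attempt := 0; attempt < 60; attempt++ {
    		code, body, err := checkHealthz(url)
    		if err == nil && code == 200 {
    			fmt.Println("apiserver healthy:", body)
    			return
    		}
    		fmt.Println("not ready yet:", code, err)
    		time.Sleep(500 * time.Millisecond)
    	}
    }
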
	I1209 23:59:08.422532   83900 cni.go:84] Creating CNI manager for ""
	I1209 23:59:08.422541   83900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:08.424238   83900 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:59:08.425539   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:59:08.435424   83900 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 23:59:08.451320   83900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:59:08.461244   83900 system_pods.go:59] 8 kube-system pods found
	I1209 23:59:08.461285   83900 system_pods.go:61] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:59:08.461296   83900 system_pods.go:61] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:59:08.461305   83900 system_pods.go:61] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:59:08.461313   83900 system_pods.go:61] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:59:08.461322   83900 system_pods.go:61] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 23:59:08.461329   83900 system_pods.go:61] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:59:08.461346   83900 system_pods.go:61] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:59:08.461358   83900 system_pods.go:61] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 23:59:08.461368   83900 system_pods.go:74] duration metric: took 10.019045ms to wait for pod list to return data ...
	I1209 23:59:08.461381   83900 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:59:08.464945   83900 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:59:08.464973   83900 node_conditions.go:123] node cpu capacity is 2
	I1209 23:59:08.464986   83900 node_conditions.go:105] duration metric: took 3.600013ms to run NodePressure ...
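The NodePressure step above reports node ephemeral-storage capacity (17734596Ki) and CPU capacity (2). A sketch of reading the same capacity fields with client-go; the kubeconfig path is a placeholder and this is not minikube's own node_conditions.go implementation:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
    }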
	I1209 23:59:08.465011   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:08.762283   83900 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 23:59:08.766618   83900 kubeadm.go:739] kubelet initialised
	I1209 23:59:08.766640   83900 kubeadm.go:740] duration metric: took 4.332483ms waiting for restarted kubelet to initialise ...
	I1209 23:59:08.766648   83900 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:08.771039   83900 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.775795   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.775820   83900 pod_ready.go:82] duration metric: took 4.756744ms for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.775829   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.775836   83900 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.780473   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "etcd-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.780498   83900 pod_ready.go:82] duration metric: took 4.651756ms for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.780507   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "etcd-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.780517   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.785226   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.785252   83900 pod_ready.go:82] duration metric: took 4.725948ms for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.785261   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.785268   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.855086   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.855117   83900 pod_ready.go:82] duration metric: took 69.839948ms for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.855129   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.855141   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:09.255415   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-proxy-rn6fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.255451   83900 pod_ready.go:82] duration metric: took 400.29383ms for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:09.255461   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-proxy-rn6fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.255467   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:09.654750   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.654775   83900 pod_ready.go:82] duration metric: took 399.301549ms for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:09.654785   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.654792   83900 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:10.054952   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:10.054979   83900 pod_ready.go:82] duration metric: took 400.177675ms for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:10.054995   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:10.055002   83900 pod_ready.go:39] duration metric: took 1.288346997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
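The pod_ready loop above waits on each system-critical pod's Ready condition, short-circuiting when the hosting node itself is not Ready. A simplified sketch of the Ready-condition wait with client-go; the pod name is taken from the log, the kubeconfig path is a placeholder, and the node-not-Ready short-circuit is omitted:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady returns true when the pod's Ready condition is True.
    func podIsReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
            if err == nil && podIsReady(pod) {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }

    func main() {
        // Error handling elided for brevity in this sketch.
        cfg, _ := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        cs, _ := kubernetes.NewForConfig(cfg)
        if err := waitForPodReady(cs, "kube-system", "coredns-7c65d6cfc9-qvtlr", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }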
	I1209 23:59:10.055019   83900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:59:10.066533   83900 ops.go:34] apiserver oom_adj: -16
	I1209 23:59:10.066559   83900 kubeadm.go:597] duration metric: took 9.298658158s to restartPrimaryControlPlane
	I1209 23:59:10.066570   83900 kubeadm.go:394] duration metric: took 9.350595042s to StartCluster
	I1209 23:59:10.066590   83900 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:10.066674   83900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:59:10.068469   83900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:10.068732   83900 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:59:10.068795   83900 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 23:59:10.068901   83900 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-825613"
	I1209 23:59:10.068925   83900 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-825613"
	W1209 23:59:10.068932   83900 addons.go:243] addon storage-provisioner should already be in state true
	I1209 23:59:10.068966   83900 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:10.068960   83900 addons.go:69] Setting default-storageclass=true in profile "embed-certs-825613"
	I1209 23:59:10.068969   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.069011   83900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-825613"
	I1209 23:59:10.068972   83900 addons.go:69] Setting metrics-server=true in profile "embed-certs-825613"
	I1209 23:59:10.069584   83900 addons.go:234] Setting addon metrics-server=true in "embed-certs-825613"
	W1209 23:59:10.069616   83900 addons.go:243] addon metrics-server should already be in state true
	I1209 23:59:10.069644   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.070176   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.070220   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.070219   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.070255   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.070947   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.071024   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.071039   83900 out.go:177] * Verifying Kubernetes components...
	I1209 23:59:10.073122   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:10.085793   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I1209 23:59:10.086012   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I1209 23:59:10.086282   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.086412   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.086794   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.086823   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.086907   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.086930   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.087156   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38521
	I1209 23:59:10.087160   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.087251   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.087469   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.087617   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.087792   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.087828   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.088155   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.088177   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.088692   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.089323   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.089358   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.090339   83900 addons.go:234] Setting addon default-storageclass=true in "embed-certs-825613"
	W1209 23:59:10.090357   83900 addons.go:243] addon default-storageclass should already be in state true
	I1209 23:59:10.090379   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.090609   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.090639   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.103251   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I1209 23:59:10.103791   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104305   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.104325   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.104376   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I1209 23:59:10.104560   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I1209 23:59:10.104713   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.104736   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104848   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104854   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.105269   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.105285   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.105393   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.105414   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.105595   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.105751   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.105784   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.106203   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.106235   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.107268   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.107710   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.109781   83900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:10.109863   83900 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 23:59:10.111198   83900 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:59:10.111210   83900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:59:10.111224   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.111294   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:59:10.111300   83900 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:59:10.111309   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.114610   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.114929   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.114962   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.114980   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.115159   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.115318   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.115438   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.115474   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.115516   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.115599   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.115704   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.115844   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.115944   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.116149   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.123450   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1209 23:59:10.123926   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.124406   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.124445   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.124744   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.124936   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.126437   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.126619   83900 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:59:10.126637   83900 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:59:10.126656   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.129218   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.129663   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.129695   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.129874   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.130055   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.130164   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.130296   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
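sshutil above builds SSH clients from the machine's id_rsa key, and ssh_runner then executes remote commands over them. A sketch of that pattern using golang.org/x/crypto/ssh; the address, user, key path, and command are placeholders, and host-key checking is skipped just as the log's ssh options do:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH authenticates with a private key file and runs one command.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("192.168.50.19:22", "docker", "/path/to/id_rsa", "uname -a")
        fmt.Println(out, err)
    }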
	I1209 23:59:10.272711   83900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:10.290121   83900 node_ready.go:35] waiting up to 6m0s for node "embed-certs-825613" to be "Ready" ...
	I1209 23:59:10.379383   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:59:10.397672   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:59:10.403543   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:59:10.403591   83900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 23:59:10.439890   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:59:10.439915   83900 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:59:10.482300   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:59:10.482331   83900 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:59:10.550923   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
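The addon manifests above are applied by shelling out to the bundled kubectl with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. A sketch of that invocation with os/exec; the binary and manifest paths are copied from the log and would have to exist on the target host (in the real flow this runs under sudo over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Paths mirror the log lines above; adjust for the actual host.
        cmd := exec.Command("/var/lib/minikube/binaries/v1.31.2/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }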
	I1209 23:59:11.613572   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.234149831s)
	I1209 23:59:11.613622   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.613634   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.613655   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.21594498s)
	I1209 23:59:11.613701   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.613713   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.613929   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.613976   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.613992   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.613979   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614006   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.614018   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614032   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.614052   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.614000   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.614085   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.614332   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614343   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.614343   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614361   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614362   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614372   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.621731   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.621749   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.622001   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.622017   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664217   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113253318s)
	I1209 23:59:11.664273   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.664288   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.664633   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.664637   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.664649   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664658   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.664676   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.664875   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.664888   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664903   83900 addons.go:475] Verifying addon metrics-server=true in "embed-certs-825613"
	I1209 23:59:11.666814   83900 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1209 23:59:09.899886   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:09.900284   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:09.900314   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:09.900262   85338 retry.go:31] will retry after 3.641244983s: waiting for machine to come up
	I1209 23:59:11.668002   83900 addons.go:510] duration metric: took 1.599215886s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1209 23:59:12.293475   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:14.960350   84547 start.go:364] duration metric: took 3m49.343321897s to acquireMachinesLock for "old-k8s-version-720064"
	I1209 23:59:14.960428   84547 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:59:14.960440   84547 fix.go:54] fixHost starting: 
	I1209 23:59:14.960886   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:14.960950   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:14.981976   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I1209 23:59:14.982425   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:14.982933   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:59:14.982966   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:14.983341   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:14.983587   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:14.983772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetState
	I1209 23:59:14.985748   84547 fix.go:112] recreateIfNeeded on old-k8s-version-720064: state=Stopped err=<nil>
	I1209 23:59:14.985774   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	W1209 23:59:14.985968   84547 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:59:14.987652   84547 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-720064" ...
	I1209 23:59:14.988869   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .Start
	I1209 23:59:14.989086   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring networks are active...
	I1209 23:59:14.989817   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network default is active
	I1209 23:59:14.990188   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network mk-old-k8s-version-720064 is active
	I1209 23:59:14.990640   84547 main.go:141] libmachine: (old-k8s-version-720064) Getting domain xml...
	I1209 23:59:14.991392   84547 main.go:141] libmachine: (old-k8s-version-720064) Creating domain...
	I1209 23:59:13.544971   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.545381   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Found IP for machine: 192.168.72.54
	I1209 23:59:13.545408   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has current primary IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.545418   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Reserving static IP address...
	I1209 23:59:13.545913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-871210", mac: "52:54:00:5e:5b:a3", ip: "192.168.72.54"} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.545942   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | skip adding static IP to network mk-default-k8s-diff-port-871210 - found existing host DHCP lease matching {name: "default-k8s-diff-port-871210", mac: "52:54:00:5e:5b:a3", ip: "192.168.72.54"}
	I1209 23:59:13.545957   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Reserved static IP address: 192.168.72.54
	I1209 23:59:13.545973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for SSH to be available...
	I1209 23:59:13.545988   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Getting to WaitForSSH function...
	I1209 23:59:13.548279   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.548641   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.548700   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.548788   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Using SSH client type: external
	I1209 23:59:13.548812   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa (-rw-------)
	I1209 23:59:13.548882   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:13.548908   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | About to run SSH command:
	I1209 23:59:13.548923   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | exit 0
	I1209 23:59:13.687704   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | SSH cmd err, output: <nil>: 
	I1209 23:59:13.688021   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetConfigRaw
	I1209 23:59:13.688673   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:13.690853   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.691187   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.691224   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.691421   84259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/config.json ...
	I1209 23:59:13.691646   84259 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:13.691665   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:13.691901   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.693958   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.694230   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.694257   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.694392   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.694602   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.694771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.694913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.695093   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.695278   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.695288   84259 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:13.803602   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:13.803629   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:13.803904   84259 buildroot.go:166] provisioning hostname "default-k8s-diff-port-871210"
	I1209 23:59:13.803928   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:13.804106   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.806369   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.806769   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.806800   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.806906   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.807053   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.807222   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.807329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.807551   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.807751   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.807765   84259 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-871210 && echo "default-k8s-diff-port-871210" | sudo tee /etc/hostname
	I1209 23:59:13.932849   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-871210
	
	I1209 23:59:13.932874   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.935573   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.935943   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.935972   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.936143   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.936388   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.936546   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.936682   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.936840   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.937016   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.937046   84259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-871210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-871210/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-871210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:14.057493   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:59:14.057526   84259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:14.057565   84259 buildroot.go:174] setting up certificates
	I1209 23:59:14.057575   84259 provision.go:84] configureAuth start
	I1209 23:59:14.057586   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:14.057860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:14.060619   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.060947   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.060973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.061170   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.063591   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.063919   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.063946   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.064133   84259 provision.go:143] copyHostCerts
	I1209 23:59:14.064180   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:14.064189   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:14.064242   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:14.064333   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:14.064341   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:14.064364   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:14.064416   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:14.064424   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:14.064442   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:14.064488   84259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-871210 san=[127.0.0.1 192.168.72.54 default-k8s-diff-port-871210 localhost minikube]
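provision.go above issues a server certificate signed by the minikube CA with SANs [127.0.0.1 192.168.72.54 default-k8s-diff-port-871210 localhost minikube]. A self-contained sketch of issuing such a certificate with crypto/x509; here the CA is generated in-process as a stand-in (minikube loads it from ca.pem/ca-key.pem) and error handling is elided:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA: a real setup would load the existing CA cert and key instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs listed in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-871210"}},
            DNSNames:     []string{"default-k8s-diff-port-871210", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.54")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
    }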
	I1209 23:59:14.332090   84259 provision.go:177] copyRemoteCerts
	I1209 23:59:14.332148   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:14.332174   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.334856   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.335275   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.335309   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.335428   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.335647   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.335860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.336002   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.422641   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:14.445943   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1209 23:59:14.468188   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:59:14.490678   84259 provision.go:87] duration metric: took 433.06125ms to configureAuth
	I1209 23:59:14.490710   84259 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:14.490883   84259 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:14.490953   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.493529   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.493858   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.493879   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.494073   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.494276   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.494435   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.494555   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.494716   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:14.494880   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:14.494899   84259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:14.720424   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:14.720452   84259 machine.go:96] duration metric: took 1.028790944s to provisionDockerMachine
	I1209 23:59:14.720468   84259 start.go:293] postStartSetup for "default-k8s-diff-port-871210" (driver="kvm2")
	I1209 23:59:14.720480   84259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:14.720496   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.720814   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:14.720844   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.723417   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.723817   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.723851   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.723992   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.724171   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.724328   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.724453   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.810295   84259 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:14.814400   84259 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:14.814424   84259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:14.814501   84259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:14.814595   84259 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:14.814706   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:14.823765   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:14.847185   84259 start.go:296] duration metric: took 126.701807ms for postStartSetup
	I1209 23:59:14.847226   84259 fix.go:56] duration metric: took 19.803160459s for fixHost
	I1209 23:59:14.847245   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.850067   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.850488   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.850516   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.850697   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.850915   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.851082   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.851218   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.851369   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:14.851558   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:14.851588   84259 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:14.960167   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788754.933814431
	
	I1209 23:59:14.960190   84259 fix.go:216] guest clock: 1733788754.933814431
	I1209 23:59:14.960198   84259 fix.go:229] Guest: 2024-12-09 23:59:14.933814431 +0000 UTC Remote: 2024-12-09 23:59:14.847230309 +0000 UTC m=+262.142902203 (delta=86.584122ms)
	I1209 23:59:14.960217   84259 fix.go:200] guest clock delta is within tolerance: 86.584122ms
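The fix.go lines above read the guest's clock with "date +%s.%N" over SSH and compare it against the host's wall clock; only when the delta falls outside a tolerance would minikube resync the guest clock (here the 86ms delta is accepted). A minimal sketch of that comparison follows; the parsing helper and the 2-second tolerance are illustrative assumptions, not minikube's exact implementation.

    // clockdelta.go - illustrative only: compare a guest's "date +%s.%N" output
    // against the local clock and decide whether it is within tolerance.
    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseUnixSeconds turns "1733788754.933814431" into a time.Time.
    func parseUnixSeconds(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // pad/truncate the fractional part to 9 digits of nanoseconds
            frac := (parts[1] + "000000000")[:9]
            nsec, err = strconv.ParseInt(frac, 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guestOut := "1733788754.933814431" // would come from `date +%s.%N` run over SSH
        guest, err := parseUnixSeconds(guestOut)
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        tolerance := 2 * time.Second // assumed tolerance, for illustration only
        if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }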
	I1209 23:59:14.960222   84259 start.go:83] releasing machines lock for "default-k8s-diff-port-871210", held for 19.916185392s
	I1209 23:59:14.960244   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.960544   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:14.963243   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.963653   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.963693   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.963820   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964285   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964505   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964612   84259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:14.964689   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.964749   84259 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:14.964772   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.967430   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.967811   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.967844   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.967871   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.968018   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.968194   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.968388   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.968405   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.968427   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.968563   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.968562   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.968706   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.968878   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.969038   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:15.072098   84259 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:15.077846   84259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:15.222108   84259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:15.230013   84259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:15.230097   84259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:15.247065   84259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:15.247091   84259 start.go:495] detecting cgroup driver to use...
	I1209 23:59:15.247168   84259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:15.268262   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:15.283824   84259 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:15.283893   84259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:15.299384   84259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:15.318727   84259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:15.457157   84259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:15.642629   84259 docker.go:233] disabling docker service ...
	I1209 23:59:15.642718   84259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:15.661646   84259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:15.681887   84259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:15.838344   84259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:15.971028   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:15.984896   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:16.006631   84259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:59:16.006691   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.019058   84259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:16.019143   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.031838   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.043176   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.057386   84259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:16.068694   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.083326   84259 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.102288   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.113538   84259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:16.126450   84259 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:16.126516   84259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:16.147057   84259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:16.157329   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:16.285726   84259 ssh_runner.go:195] Run: sudo systemctl restart crio
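The block above reconfigures cri-o entirely through shell commands run over SSH: sed edits to /etc/crio/crio.conf.d/02-crio.conf set the pause image, the cgroupfs cgroup manager and the unprivileged-port sysctl, then br_netfilter is loaded, IP forwarding is enabled and crio is restarted. The sketch below replays the main commands from the log; runSSH is a hypothetical stand-in for minikube's ssh_runner and only prints the commands.

    // crioconfig.go - illustrative sketch of the cri-o reconfiguration steps
    // shown in the log above. runSSH is a placeholder, not minikube's API.
    package main

    import "fmt"

    // runSSH stands in for executing a command on the guest over SSH.
    func runSSH(cmd string) error {
        fmt.Println("ssh>", cmd)
        return nil
    }

    func main() {
        cmds := []string{
            // point cri-o at the pause image and the cgroupfs driver
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
            // allow pods to bind low ports
            `sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf`,
            // kernel prerequisites, then restart the runtime
            `sudo modprobe br_netfilter`,
            `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
            `sudo systemctl daemon-reload`,
            `sudo systemctl restart crio`,
        }
        for _, c := range cmds {
            if err := runSSH(c); err != nil {
                panic(err)
            }
        }
    }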
	I1209 23:59:16.385931   84259 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:16.386014   84259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:16.390840   84259 start.go:563] Will wait 60s for crictl version
	I1209 23:59:16.390912   84259 ssh_runner.go:195] Run: which crictl
	I1209 23:59:16.394870   84259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:16.438893   84259 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:16.438986   84259 ssh_runner.go:195] Run: crio --version
	I1209 23:59:16.470118   84259 ssh_runner.go:195] Run: crio --version
	I1209 23:59:16.499603   84259 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:59:16.500665   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:16.503766   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:16.504242   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:16.504329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:16.504609   84259 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:16.508742   84259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:16.520816   84259 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-871210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:16.520944   84259 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:59:16.520988   84259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:16.563220   84259 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:59:16.563315   84259 ssh_runner.go:195] Run: which lz4
	I1209 23:59:16.568962   84259 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:59:16.573715   84259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:59:16.573756   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:59:14.294886   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:16.796597   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:17.795347   83900 node_ready.go:49] node "embed-certs-825613" has status "Ready":"True"
	I1209 23:59:17.795381   83900 node_ready.go:38] duration metric: took 7.505214814s for node "embed-certs-825613" to be "Ready" ...
	I1209 23:59:17.795394   83900 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:17.801650   83900 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:17.808087   83900 pod_ready.go:93] pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:17.808113   83900 pod_ready.go:82] duration metric: took 6.437717ms for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:17.808127   83900 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:16.350842   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting to get IP...
	I1209 23:59:16.351883   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.352326   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.352393   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.352304   85519 retry.go:31] will retry after 268.149849ms: waiting for machine to come up
	I1209 23:59:16.621742   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.622116   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.622145   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.622066   85519 retry.go:31] will retry after 365.051996ms: waiting for machine to come up
	I1209 23:59:16.988590   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.989124   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.989154   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.989089   85519 retry.go:31] will retry after 441.697933ms: waiting for machine to come up
	I1209 23:59:17.432962   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.433453   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.433482   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.433356   85519 retry.go:31] will retry after 503.173846ms: waiting for machine to come up
	I1209 23:59:17.938107   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.938576   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.938610   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.938533   85519 retry.go:31] will retry after 476.993358ms: waiting for machine to come up
	I1209 23:59:18.417462   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:18.418037   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:18.418064   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:18.417989   85519 retry.go:31] will retry after 732.449849ms: waiting for machine to come up
	I1209 23:59:19.152120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.152680   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.152708   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.152624   85519 retry.go:31] will retry after 764.17794ms: waiting for machine to come up
	I1209 23:59:19.918630   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.919113   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.919141   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.919066   85519 retry.go:31] will retry after 1.072352346s: waiting for machine to come up
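The repeated "will retry after ...: waiting for machine to come up" lines above come from a backoff-and-retry loop around the libvirt DHCP lease lookup for the old-k8s-version-720064 domain. A sketch of such a loop follows; lookupIP is a hypothetical stand-in for the lease query, and the jittered, growing backoff is illustrative rather than retry.go's exact policy.

    // machinewait.go - illustrative retry loop for "waiting for machine to come up".
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the libvirt DHCP leases for the domain's
    // MAC address; in this sketch it always fails, as if the lease has not
    // appeared yet.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address of domain")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        wait := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // jittered, growing backoff, loosely mirroring the intervals in the log
            sleep := wait + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            wait += wait / 2
        }
        return "", fmt.Errorf("machine did not get an IP within %v", timeout)
    }

    func main() {
        if ip, err := waitForIP(3 * time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("machine IP:", ip)
        }
    }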
	I1209 23:59:17.907764   84259 crio.go:462] duration metric: took 1.338836821s to copy over tarball
	I1209 23:59:17.907846   84259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:59:20.133171   84259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.22529043s)
	I1209 23:59:20.133214   84259 crio.go:469] duration metric: took 2.225420822s to extract the tarball
	I1209 23:59:20.133224   84259 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:59:20.169786   84259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:20.211990   84259 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:59:20.212020   84259 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:59:20.212030   84259 kubeadm.go:934] updating node { 192.168.72.54 8444 v1.31.2 crio true true} ...
	I1209 23:59:20.212166   84259 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-871210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:20.212247   84259 ssh_runner.go:195] Run: crio config
	I1209 23:59:20.255141   84259 cni.go:84] Creating CNI manager for ""
	I1209 23:59:20.255169   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:20.255182   84259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:20.255215   84259 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-871210 NodeName:default-k8s-diff-port-871210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:59:20.255338   84259 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-871210"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.54"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:20.255407   84259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:59:20.265160   84259 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:20.265244   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:20.275799   84259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1209 23:59:20.292089   84259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:20.308054   84259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1209 23:59:20.327464   84259 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:20.331101   84259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:20.343232   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:20.473225   84259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:20.492069   84259 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210 for IP: 192.168.72.54
	I1209 23:59:20.492098   84259 certs.go:194] generating shared ca certs ...
	I1209 23:59:20.492119   84259 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:20.492311   84259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:20.492363   84259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:20.492378   84259 certs.go:256] generating profile certs ...
	I1209 23:59:20.492499   84259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/client.key
	I1209 23:59:20.492605   84259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.key.9cc284f2
	I1209 23:59:20.492663   84259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.key
	I1209 23:59:20.492831   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:20.492872   84259 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:20.492886   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:20.492918   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:20.492951   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:20.492997   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:20.493071   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:20.494023   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:20.543769   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:20.583697   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:20.615170   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:20.638784   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 23:59:20.673708   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:59:20.696690   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:20.719682   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:20.744292   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:20.767643   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:20.790927   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:20.816746   84259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:20.834150   84259 ssh_runner.go:195] Run: openssl version
	I1209 23:59:20.840153   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:20.851006   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.855430   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.855492   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.860866   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:20.871983   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:20.882642   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.886978   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.887050   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.892472   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:20.902959   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:20.913774   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.918344   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.918394   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.923981   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
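The openssl/ln pairs above install each CA certificate under /etc/ssl/certs via its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how the system trust store looks certificates up. A sketch of the same idea follows; it runs locally rather than over SSH, the paths are examples, and writing into /etc/ssl/certs would of course require root.

    // certlink.go - illustrative: link a CA certificate into /etc/ssl/certs under
    // its OpenSSL subject hash, as the log above does for minikubeCA.pem.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path

        // ask openssl for the subject hash (e.g. "b5213941")
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // only create the link if it does not already exist, mirroring
        // `test -L ... || ln -fs ...` in the log
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(cert, link); err != nil {
                panic(err)
            }
        }
        fmt.Println(cert, "->", link)
    }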
	I1209 23:59:20.934931   84259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:20.939662   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:20.945633   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:20.951650   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:20.957628   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:20.963600   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:20.969399   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 23:59:20.974992   84259 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-871210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:20.975088   84259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:20.975143   84259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:21.012553   84259 cri.go:89] found id: ""
	I1209 23:59:21.012647   84259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:21.023728   84259 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:21.023758   84259 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:21.023808   84259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:21.033788   84259 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:21.034806   84259 kubeconfig.go:125] found "default-k8s-diff-port-871210" server: "https://192.168.72.54:8444"
	I1209 23:59:21.036852   84259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:21.046251   84259 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I1209 23:59:21.046280   84259 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:21.046292   84259 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:21.046354   84259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:21.084915   84259 cri.go:89] found id: ""
	I1209 23:59:21.085000   84259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:21.100592   84259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:21.111180   84259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:21.111204   84259 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:21.111254   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 23:59:21.120027   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:21.120087   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:21.129165   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 23:59:21.138254   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:21.138315   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:21.147276   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 23:59:21.155635   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:21.155711   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:21.164794   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 23:59:21.173421   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:21.173477   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:21.182615   84259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:21.191795   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:21.289295   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.415644   84259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.126305379s)
	I1209 23:59:22.415695   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.616211   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.672571   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
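Because existing configuration was found, the restart path above does not run a full "kubeadm init"; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, exactly the five commands in the log. The sketch below issues that same sequence; runSSH is again a hypothetical stand-in that only prints the commands.

    // kubeadmphases.go - illustrative: replay the kubeadm init phases logged above.
    package main

    import "fmt"

    // runSSH stands in for running a command on the guest over SSH.
    func runSSH(cmd string) error {
        fmt.Println("ssh>", cmd)
        return nil
    }

    func main() {
        const binDir = "/var/lib/minikube/binaries/v1.31.2"
        const cfg = "/var/tmp/minikube/kubeadm.yaml"
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, p, cfg)
            if err := runSSH(cmd); err != nil {
                panic(err)
            }
        }
    }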
	I1209 23:59:22.731282   84259 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:22.731363   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:19.816966   83900 pod_ready.go:103] pod "etcd-embed-certs-825613" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:21.814625   83900 pod_ready.go:93] pod "etcd-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.814650   83900 pod_ready.go:82] duration metric: took 4.006513871s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.814662   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.819611   83900 pod_ready.go:93] pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.819634   83900 pod_ready.go:82] duration metric: took 4.964081ms for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.819647   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.823994   83900 pod_ready.go:93] pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.824016   83900 pod_ready.go:82] duration metric: took 4.360902ms for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.824028   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.829102   83900 pod_ready.go:93] pod "kube-proxy-rn6fg" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.829128   83900 pod_ready.go:82] duration metric: took 5.090595ms for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.829161   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:23.983548   83900 pod_ready.go:93] pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:23.983603   83900 pod_ready.go:82] duration metric: took 2.154427639s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:23.983620   83900 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
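The pod_ready.go lines above poll each system-critical pod in kube-system until its Ready condition reports True, recording how long each wait took. A minimal client-go sketch of that check follows; the kubeconfig path is a placeholder, the pod name is taken from the log, and the 2-second poll interval is an illustrative assumption.

    // podready.go - illustrative: poll a pod until its Ready condition is True,
    // roughly what pod_ready.go does for the kube-system pods in the log.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-825613", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }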
	I1209 23:59:20.992747   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:20.993138   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:20.993196   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:20.993111   85519 retry.go:31] will retry after 1.479639772s: waiting for machine to come up
	I1209 23:59:22.474889   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:22.475414   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:22.475445   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:22.475364   85519 retry.go:31] will retry after 2.248463233s: waiting for machine to come up
	I1209 23:59:24.725307   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:24.725765   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:24.725796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:24.725706   85519 retry.go:31] will retry after 2.803512929s: waiting for machine to come up
	I1209 23:59:23.231575   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:23.731704   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:24.231740   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:24.265107   84259 api_server.go:72] duration metric: took 1.533824471s to wait for apiserver process to appear ...
	I1209 23:59:24.265144   84259 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:59:24.265173   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:24.265664   84259 api_server.go:269] stopped: https://192.168.72.54:8444/healthz: Get "https://192.168.72.54:8444/healthz": dial tcp 192.168.72.54:8444: connect: connection refused
	I1209 23:59:24.765571   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.251990   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:27.252018   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:27.252033   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.284733   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:27.284769   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:27.284781   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.313967   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:27.313998   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
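The api_server.go entries above keep requesting https://192.168.72.54:8444/healthz until it answers 200; connection refused, 403 (anonymous requests are rejected until RBAC bootstrap finishes) and 500 (individual poststarthooks still failing, as listed) are all treated as "not ready yet". A minimal sketch of that polling loop follows; it skips TLS verification and does not present the client certificate minikube actually uses, so it is illustrative only.

    // healthz.go - illustrative: poll the apiserver /healthz endpoint until it
    // returns 200 OK, treating refused connections and 403/500 as "not ready".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        url := "https://192.168.72.54:8444/healthz"
        client := &http.Client{
            Timeout: 5 * time.Second,
            // the apiserver cert is signed by the cluster CA; for this sketch we
            // skip verification instead of loading that CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz ok:", string(body))
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            } else {
                fmt.Println("healthz not reachable yet:", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for a healthy apiserver")
    }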
	I1209 23:59:25.990720   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:28.490332   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:27.530567   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:27.531086   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:27.531120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:27.531022   85519 retry.go:31] will retry after 2.800227475s: waiting for machine to come up
	I1209 23:59:30.333201   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:30.333660   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:30.333684   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:30.333621   85519 retry.go:31] will retry after 3.041733113s: waiting for machine to come up
	I1209 23:59:27.765806   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.773528   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:27.773564   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:28.266012   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:28.275940   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:28.275965   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:28.765502   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:28.770695   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:28.770725   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:29.265309   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:29.270844   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:29.270883   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:29.765323   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:29.769949   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:29.769974   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:30.265981   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:30.270284   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:30.270313   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:30.765915   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:30.771542   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I1209 23:59:30.781357   84259 api_server.go:141] control plane version: v1.31.2
	I1209 23:59:30.781390   84259 api_server.go:131] duration metric: took 6.516238077s to wait for apiserver health ...
	I1209 23:59:30.781400   84259 cni.go:84] Creating CNI manager for ""
	I1209 23:59:30.781409   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:30.783438   84259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:59:30.784794   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:59:30.794916   84259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 23:59:30.812099   84259 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:59:30.822915   84259 system_pods.go:59] 8 kube-system pods found
	I1209 23:59:30.822967   84259 system_pods.go:61] "coredns-7c65d6cfc9-wclgl" [22b2e24e-5a03-4d4f-a071-68e414aaf6cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:59:30.822980   84259 system_pods.go:61] "etcd-default-k8s-diff-port-871210" [2058a33f-dd4b-42bc-94d9-4cd130b9389c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:59:30.822990   84259 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-871210" [26ffd01d-48b5-4e38-a91b-bf75dd75e3c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:59:30.823000   84259 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-871210" [cea3a810-c7e9-48d6-9fd7-1f6258c45387] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:59:30.823006   84259 system_pods.go:61] "kube-proxy-d7sxm" [e28ccb5f-a282-4371-ae1a-fca52dd58616] Running
	I1209 23:59:30.823021   84259 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-871210" [790619f6-29a2-4bbc-89ba-a7b93653459b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:59:30.823033   84259 system_pods.go:61] "metrics-server-6867b74b74-lgzdz" [2c251249-58b6-44c4-929a-0f5c963d83b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:59:30.823039   84259 system_pods.go:61] "storage-provisioner" [63078ee5-81b8-4548-88bf-2b89857ea01a] Running
	I1209 23:59:30.823050   84259 system_pods.go:74] duration metric: took 10.914419ms to wait for pod list to return data ...
	I1209 23:59:30.823063   84259 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:59:30.828012   84259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:59:30.828062   84259 node_conditions.go:123] node cpu capacity is 2
	I1209 23:59:30.828076   84259 node_conditions.go:105] duration metric: took 5.004481ms to run NodePressure ...
	I1209 23:59:30.828097   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:31.092839   84259 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 23:59:31.097588   84259 kubeadm.go:739] kubelet initialised
	I1209 23:59:31.097613   84259 kubeadm.go:740] duration metric: took 4.744115ms waiting for restarted kubelet to initialise ...
	I1209 23:59:31.097623   84259 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:31.104318   84259 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:30.991511   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:33.490390   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:34.824186   83859 start.go:364] duration metric: took 55.338760885s to acquireMachinesLock for "no-preload-048296"
	I1209 23:59:34.824245   83859 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:59:34.824272   83859 fix.go:54] fixHost starting: 
	I1209 23:59:34.824660   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:34.824713   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:34.842421   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I1209 23:59:34.842851   83859 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:34.843297   83859 main.go:141] libmachine: Using API Version  1
	I1209 23:59:34.843319   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:34.843701   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:34.843916   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:34.844066   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1209 23:59:34.845768   83859 fix.go:112] recreateIfNeeded on no-preload-048296: state=Stopped err=<nil>
	I1209 23:59:34.845792   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	W1209 23:59:34.845958   83859 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:59:34.847617   83859 out.go:177] * Restarting existing kvm2 VM for "no-preload-048296" ...
	I1209 23:59:33.378848   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379297   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has current primary IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379326   84547 main.go:141] libmachine: (old-k8s-version-720064) Found IP for machine: 192.168.39.188
	I1209 23:59:33.379351   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserving static IP address...
	I1209 23:59:33.379701   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.379722   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserved static IP address: 192.168.39.188
	I1209 23:59:33.379743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | skip adding static IP to network mk-old-k8s-version-720064 - found existing host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"}
	I1209 23:59:33.379759   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Getting to WaitForSSH function...
	I1209 23:59:33.379776   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting for SSH to be available...
	I1209 23:59:33.381697   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.381990   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.382027   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.382137   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH client type: external
	I1209 23:59:33.382163   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa (-rw-------)
	I1209 23:59:33.382193   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:33.382212   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | About to run SSH command:
	I1209 23:59:33.382229   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | exit 0
	I1209 23:59:33.507653   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | SSH cmd err, output: <nil>: 
	I1209 23:59:33.507957   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetConfigRaw
	I1209 23:59:33.508555   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.511020   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511399   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.511431   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511695   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:59:33.511932   84547 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:33.511958   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:33.512144   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.514383   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514722   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.514743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514876   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.515048   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515187   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515349   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.515596   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.515835   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.515850   84547 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:33.623370   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:33.623407   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623670   84547 buildroot.go:166] provisioning hostname "old-k8s-version-720064"
	I1209 23:59:33.623708   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623909   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.626370   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626749   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.626781   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626943   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.627172   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627348   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627490   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.627666   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.627868   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.627884   84547 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-720064 && echo "old-k8s-version-720064" | sudo tee /etc/hostname
	I1209 23:59:33.749338   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-720064
	
	I1209 23:59:33.749370   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.752401   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.752774   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.752807   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.753000   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.753180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753372   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753531   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.753674   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.753837   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.753853   84547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-720064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-720064/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-720064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:33.872512   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:59:33.872548   84547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:33.872575   84547 buildroot.go:174] setting up certificates
	I1209 23:59:33.872585   84547 provision.go:84] configureAuth start
	I1209 23:59:33.872597   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.872906   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.875719   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876012   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.876051   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876210   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.878539   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.878857   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.878894   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.879025   84547 provision.go:143] copyHostCerts
	I1209 23:59:33.879087   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:33.879100   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:33.879166   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:33.879254   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:33.879262   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:33.879283   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:33.879338   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:33.879346   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:33.879365   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:33.879414   84547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-720064 san=[127.0.0.1 192.168.39.188 localhost minikube old-k8s-version-720064]
	I1209 23:59:34.211417   84547 provision.go:177] copyRemoteCerts
	I1209 23:59:34.211475   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:34.211504   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.214285   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.214686   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214789   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.215006   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.215180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.215345   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.301399   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:59:34.326216   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:34.349136   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 23:59:34.371597   84547 provision.go:87] duration metric: took 498.999263ms to configureAuth
	I1209 23:59:34.371633   84547 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:34.371832   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:59:34.371902   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.374649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.374985   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.375031   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.375161   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.375368   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375541   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.375897   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.376128   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.376152   84547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:34.588357   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:34.588391   84547 machine.go:96] duration metric: took 1.076442399s to provisionDockerMachine
	I1209 23:59:34.588402   84547 start.go:293] postStartSetup for "old-k8s-version-720064" (driver="kvm2")
	I1209 23:59:34.588412   84547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:34.588443   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.588749   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:34.588772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.591413   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.591805   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.591836   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.592001   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.592176   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.592325   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.592431   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.673445   84547 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:34.677394   84547 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:34.677421   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:34.677498   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:34.677600   84547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:34.677716   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:34.686261   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:34.709412   84547 start.go:296] duration metric: took 120.993275ms for postStartSetup
	I1209 23:59:34.709467   84547 fix.go:56] duration metric: took 19.749026723s for fixHost
	I1209 23:59:34.709495   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.712458   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712762   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.712796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712930   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.713158   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713335   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713506   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.713690   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.713852   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.713862   84547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:34.823993   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788774.778513264
	
	I1209 23:59:34.824022   84547 fix.go:216] guest clock: 1733788774.778513264
	I1209 23:59:34.824034   84547 fix.go:229] Guest: 2024-12-09 23:59:34.778513264 +0000 UTC Remote: 2024-12-09 23:59:34.709473288 +0000 UTC m=+249.236536627 (delta=69.039976ms)
	I1209 23:59:34.824067   84547 fix.go:200] guest clock delta is within tolerance: 69.039976ms
	I1209 23:59:34.824075   84547 start.go:83] releasing machines lock for "old-k8s-version-720064", held for 19.863672525s
	I1209 23:59:34.824110   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.824391   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:34.827165   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827596   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.827629   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827894   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828467   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828690   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828802   84547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:34.828865   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.828888   84547 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:34.828917   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.831978   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832052   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832514   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832548   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832572   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832748   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832751   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832899   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832982   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.832986   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.833198   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833217   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833370   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.833374   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.949044   84547 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:34.954999   84547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:35.100096   84547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:35.105764   84547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:35.105852   84547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:35.126893   84547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:35.126915   84547 start.go:495] detecting cgroup driver to use...
	I1209 23:59:35.126968   84547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:35.143236   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:35.157019   84547 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:35.157098   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:35.170608   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:35.184816   84547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:35.310043   84547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:35.486472   84547 docker.go:233] disabling docker service ...
	I1209 23:59:35.486653   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:35.501938   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:35.516997   84547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:35.667304   84547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:35.821316   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:35.836423   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:35.854633   84547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 23:59:35.854719   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.865592   84547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:35.865677   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.877247   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.889007   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.900196   84547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:35.912403   84547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:35.922919   84547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:35.922987   84547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:35.939439   84547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:35.949715   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:36.100115   84547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:36.207129   84547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:36.207206   84547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:36.213672   84547 start.go:563] Will wait 60s for crictl version
	I1209 23:59:36.213758   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:36.218204   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:36.255262   84547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:36.255367   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.286277   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.317334   84547 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
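The cri-o reconfiguration above is done with a few in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf: pin pause_image, switch cgroup_manager to cgroupfs, and reset conmon_cgroup to "pod". A rough Go equivalent of those edits is sketched below purely for illustration; minikube itself runs the sed commands seen in the log.

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same kind of edits as the sed commands in the
// log: pin the pause image, force the given cgroup manager, and reset
// conmon_cgroup to "pod". Illustrative sketch only, not minikube's code.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupManager))
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.2", "cgroupfs"))
}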
	I1209 23:59:34.848733   83859 main.go:141] libmachine: (no-preload-048296) Calling .Start
	I1209 23:59:34.848943   83859 main.go:141] libmachine: (no-preload-048296) Ensuring networks are active...
	I1209 23:59:34.849641   83859 main.go:141] libmachine: (no-preload-048296) Ensuring network default is active
	I1209 23:59:34.850039   83859 main.go:141] libmachine: (no-preload-048296) Ensuring network mk-no-preload-048296 is active
	I1209 23:59:34.850540   83859 main.go:141] libmachine: (no-preload-048296) Getting domain xml...
	I1209 23:59:34.851224   83859 main.go:141] libmachine: (no-preload-048296) Creating domain...
	I1209 23:59:36.270761   83859 main.go:141] libmachine: (no-preload-048296) Waiting to get IP...
	I1209 23:59:36.271970   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.272578   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.272645   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.272536   85657 retry.go:31] will retry after 253.181092ms: waiting for machine to come up
	I1209 23:59:36.527128   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.527735   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.527762   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.527683   85657 retry.go:31] will retry after 297.608817ms: waiting for machine to come up
	I1209 23:59:36.827372   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.828819   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.828849   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.828763   85657 retry.go:31] will retry after 374.112777ms: waiting for machine to come up
	I1209 23:59:33.112105   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:35.611353   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:36.115472   84259 pod_ready.go:93] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:36.115504   84259 pod_ready.go:82] duration metric: took 5.011153637s for pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:36.115521   84259 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:35.492415   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:37.992287   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:36.318651   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:36.321927   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322336   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:36.322366   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322619   84547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:36.327938   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:36.344152   84547 kubeadm.go:883] updating cluster {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:36.344279   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:59:36.344328   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:36.391261   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:36.391348   84547 ssh_runner.go:195] Run: which lz4
	I1209 23:59:36.395391   84547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:59:36.399585   84547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:59:36.399622   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 23:59:37.969251   84547 crio.go:462] duration metric: took 1.573891158s to copy over tarball
	I1209 23:59:37.969362   84547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
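Because the node had no /preloaded.tar.lz4, the tarball is copied over and then unpacked into /var with tar and lz4, as the two commands above show. Here is a small Go sketch of that check-then-extract step, using the same path and flags from the log; the helper name is ours, not minikube's.

package main

import (
	"log"
	"os"
	"os/exec"
)

// extractPreload mirrors the sequence in the log: verify the preload tarball
// is present on the node, then unpack it into /var with tar and lz4. In
// minikube the tarball is first copied over SSH; this sketch only covers the
// local steps.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return err // tarball missing: it would have to be copied over first
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}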
	I1209 23:59:37.204687   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:37.205458   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:37.205479   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:37.205372   85657 retry.go:31] will retry after 420.490569ms: waiting for machine to come up
	I1209 23:59:37.627167   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:37.629813   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:37.629839   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:37.629687   85657 retry.go:31] will retry after 561.795207ms: waiting for machine to come up
	I1209 23:59:38.193652   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:38.194178   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:38.194200   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:38.194119   85657 retry.go:31] will retry after 925.018936ms: waiting for machine to come up
	I1209 23:59:39.121476   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:39.122014   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:39.122046   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:39.121958   85657 retry.go:31] will retry after 984.547761ms: waiting for machine to come up
	I1209 23:59:40.108478   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:40.108947   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:40.108981   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:40.108925   85657 retry.go:31] will retry after 1.261498912s: waiting for machine to come up
	I1209 23:59:41.372403   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:41.372901   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:41.372922   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:41.372884   85657 retry.go:31] will retry after 1.498283156s: waiting for machine to come up
	I1209 23:59:38.122964   84259 pod_ready.go:93] pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:38.122990   84259 pod_ready.go:82] duration metric: took 2.007459606s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:38.123004   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.133257   84259 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:40.631911   84259 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:40.631940   84259 pod_ready.go:82] duration metric: took 2.508926956s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.631957   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.492044   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:42.991179   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:41.050494   84547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081073233s)
	I1209 23:59:41.050524   84547 crio.go:469] duration metric: took 3.081237568s to extract the tarball
	I1209 23:59:41.050533   84547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:59:41.092694   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:41.125264   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:41.125289   84547 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.125378   84547 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.125388   84547 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.125436   84547 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 23:59:41.125466   84547 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.125497   84547 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.125515   84547 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127285   84547 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.127325   84547 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 23:59:41.127376   84547 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.127405   84547 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127421   84547 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.127300   84547 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.302277   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.314265   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 23:59:41.323683   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.326327   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.345042   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.345550   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.362324   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.367612   84547 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 23:59:41.367663   84547 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.367706   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.406644   84547 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 23:59:41.406691   84547 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 23:59:41.406783   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.450490   84547 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 23:59:41.450550   84547 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.450601   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.477332   84547 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 23:59:41.477389   84547 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.477438   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499714   84547 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 23:59:41.499757   84547 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.499783   84547 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 23:59:41.499806   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499818   84547 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.499861   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505128   84547 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 23:59:41.505179   84547 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.505205   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.505249   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.505212   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505306   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.505342   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.507860   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.507923   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.643108   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.644251   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.644273   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.644358   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.644400   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.650732   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.650817   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.777981   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.789388   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.789435   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.789491   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.789561   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.789592   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.802784   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.912370   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.914494   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 23:59:41.944460   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 23:59:41.944564   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 23:59:41.944569   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.944660   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 23:59:41.944662   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 23:59:41.944726   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 23:59:42.104003   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 23:59:42.104074   84547 cache_images.go:92] duration metric: took 978.7734ms to LoadCachedImages
	W1209 23:59:42.104176   84547 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1209 23:59:42.104198   84547 kubeadm.go:934] updating node { 192.168.39.188 8443 v1.20.0 crio true true} ...
	I1209 23:59:42.104326   84547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-720064 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:42.104431   84547 ssh_runner.go:195] Run: crio config
	I1209 23:59:42.154016   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:59:42.154041   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:42.154050   84547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:42.154066   84547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.188 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-720064 NodeName:old-k8s-version-720064 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 23:59:42.154226   84547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-720064"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.188
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.188"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:42.154292   84547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 23:59:42.164866   84547 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:42.164943   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:42.175137   84547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 23:59:42.192176   84547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:42.210695   84547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 23:59:42.230364   84547 ssh_runner.go:195] Run: grep 192.168.39.188	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:42.234239   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:42.246747   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:42.379643   84547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:42.396626   84547 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064 for IP: 192.168.39.188
	I1209 23:59:42.396713   84547 certs.go:194] generating shared ca certs ...
	I1209 23:59:42.396746   84547 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.396942   84547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:42.397007   84547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:42.397023   84547 certs.go:256] generating profile certs ...
	I1209 23:59:42.397191   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/client.key
	I1209 23:59:42.397270   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key.abcbe3fa
	I1209 23:59:42.397330   84547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key
	I1209 23:59:42.397516   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:42.397564   84547 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:42.397580   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:42.397623   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:42.397662   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:42.397701   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:42.397768   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:42.398726   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:42.450053   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:42.475685   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:42.514396   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:42.562383   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 23:59:42.589690   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:59:42.614149   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:42.647112   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:42.671117   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:42.694303   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:42.722563   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:42.753478   84547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:42.773673   84547 ssh_runner.go:195] Run: openssl version
	I1209 23:59:42.779930   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:42.791531   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796422   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796536   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.802386   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:42.817058   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:42.828570   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833292   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833357   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.839679   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:42.850729   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:42.861699   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866417   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866479   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.873602   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:42.884398   84547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:42.889137   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:42.896108   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:42.902584   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:42.908706   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:42.914313   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:42.919943   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
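Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours. The same question can be answered with Go's crypto/x509, as in this sketch; the path in main is one of the certificates checked above.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window, which is what `openssl x509 -checkend 86400`
// answers for a 24h window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}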
	I1209 23:59:42.925417   84547 kubeadm.go:392] StartCluster: {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:42.925546   84547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:42.925602   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:42.962948   84547 cri.go:89] found id: ""
	I1209 23:59:42.963030   84547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:42.973746   84547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:42.973768   84547 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:42.973819   84547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:42.983593   84547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:42.984528   84547 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-720064" does not appear in /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:59:42.985106   84547 kubeconfig.go:62] /home/jenkins/minikube-integration/19888-18950/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-720064" cluster setting kubeconfig missing "old-k8s-version-720064" context setting]
	I1209 23:59:42.985981   84547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.996842   84547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:43.007858   84547 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.188
	I1209 23:59:43.007901   84547 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:43.007916   84547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:43.007982   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:43.046416   84547 cri.go:89] found id: ""
	I1209 23:59:43.046495   84547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:43.063226   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:43.073919   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:43.073946   84547 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:43.073991   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:59:43.084217   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:43.084292   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:43.094196   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:59:43.104098   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:43.104178   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:43.115056   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.124696   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:43.124761   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.135180   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:59:43.146325   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:43.146392   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:43.156017   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:43.166584   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:43.376941   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.198180   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.431002   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.549010   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.644736   84547 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:44.644827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:45.145713   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:42.872724   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:42.873229   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:42.873260   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:42.873178   85657 retry.go:31] will retry after 1.469240763s: waiting for machine to come up
	I1209 23:59:44.344825   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:44.345339   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:44.345374   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:44.345317   85657 retry.go:31] will retry after 1.85673524s: waiting for machine to come up
	I1209 23:59:46.203693   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:46.204128   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:46.204152   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:46.204065   85657 retry.go:31] will retry after 3.5180616s: waiting for machine to come up
	I1209 23:59:43.003893   84259 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:43.734778   84259 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:43.734806   84259 pod_ready.go:82] duration metric: took 3.102839103s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.734821   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d7sxm" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.743393   84259 pod_ready.go:93] pod "kube-proxy-d7sxm" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:43.743421   84259 pod_ready.go:82] duration metric: took 8.592191ms for pod "kube-proxy-d7sxm" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.743435   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:44.250028   84259 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:44.250051   84259 pod_ready.go:82] duration metric: took 506.607784ms for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:44.250063   84259 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:46.256096   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:45.494189   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:47.993074   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:45.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.145300   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.644880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.144955   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.645081   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.145132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.645278   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.144932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.645874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:50.145061   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
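	The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are a poll loop: the check is re-issued roughly every 500ms until an apiserver process shows up. Below is a minimal, self-contained Go sketch of that kind of wait; the pattern, interval, and timeout are illustrative and this is not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until a process matching the pattern exists,
	// the way the log re-runs "pgrep -xnf kube-apiserver.*minikube.*" about
	// every 500ms. Pattern, interval, and timeout here are illustrative.
	func waitForProcess(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			// pgrep exits 0 as soon as at least one process matches.
			if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("no process matched %q within %s", pattern, timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}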
	I1209 23:59:49.725528   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:49.726040   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:49.726069   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:49.725982   85657 retry.go:31] will retry after 3.98915487s: waiting for machine to come up
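	The retry.go lines above show the machine-IP wait: each failed DHCP-lease lookup schedules another attempt after a longer, slightly randomized delay. A small Go sketch of that retry-with-backoff pattern follows; the lookup function is a stand-in, not libmachine's API, and the growth factor and jitter are assumptions for illustration.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff mimics the retry.go pattern in the log: keep re-running a
	// check (here, a stand-in IP lookup), waiting a bit longer, with some jitter,
	// between attempts.
	func retryWithBackoff(lookup func() (string, error), attempts int) (string, error) {
		delay := time.Second
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // lengthen the delay for the next attempt
		}
		return "", errors.New("machine never reported an IP address")
	}

	func main() {
		calls := 0
		ip, err := retryWithBackoff(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.61.182", nil // illustrative address taken from the log above
		}, 10)
		fmt.Println(ip, err)
	}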
	I1209 23:59:48.256397   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.757098   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.491110   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:52.989722   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.645171   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.144908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.645198   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.145700   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.145782   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.645678   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.145521   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:55.145578   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.718634   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.719141   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has current primary IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.719163   83859 main.go:141] libmachine: (no-preload-048296) Found IP for machine: 192.168.61.182
	I1209 23:59:53.719173   83859 main.go:141] libmachine: (no-preload-048296) Reserving static IP address...
	I1209 23:59:53.719594   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "no-preload-048296", mac: "52:54:00:c6:cf:c7", ip: "192.168.61.182"} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.719621   83859 main.go:141] libmachine: (no-preload-048296) DBG | skip adding static IP to network mk-no-preload-048296 - found existing host DHCP lease matching {name: "no-preload-048296", mac: "52:54:00:c6:cf:c7", ip: "192.168.61.182"}
	I1209 23:59:53.719632   83859 main.go:141] libmachine: (no-preload-048296) Reserved static IP address: 192.168.61.182
	I1209 23:59:53.719641   83859 main.go:141] libmachine: (no-preload-048296) Waiting for SSH to be available...
	I1209 23:59:53.719671   83859 main.go:141] libmachine: (no-preload-048296) DBG | Getting to WaitForSSH function...
	I1209 23:59:53.722015   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.722309   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.722342   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.722445   83859 main.go:141] libmachine: (no-preload-048296) DBG | Using SSH client type: external
	I1209 23:59:53.722471   83859 main.go:141] libmachine: (no-preload-048296) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa (-rw-------)
	I1209 23:59:53.722504   83859 main.go:141] libmachine: (no-preload-048296) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:53.722517   83859 main.go:141] libmachine: (no-preload-048296) DBG | About to run SSH command:
	I1209 23:59:53.722529   83859 main.go:141] libmachine: (no-preload-048296) DBG | exit 0
	I1209 23:59:53.843261   83859 main.go:141] libmachine: (no-preload-048296) DBG | SSH cmd err, output: <nil>: 
	I1209 23:59:53.843673   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetConfigRaw
	I1209 23:59:53.844367   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:53.846889   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.847170   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.847203   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.847433   83859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/config.json ...
	I1209 23:59:53.847656   83859 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:53.847675   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:53.847834   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:53.849867   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.850179   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.850213   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.850372   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:53.850548   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.850702   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.850878   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:53.851058   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:53.851288   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:53.851303   83859 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:53.947889   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:53.947916   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:53.948220   83859 buildroot.go:166] provisioning hostname "no-preload-048296"
	I1209 23:59:53.948259   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:53.948496   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:53.951070   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.951479   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.951509   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.951729   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:53.951919   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.952120   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.952280   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:53.952456   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:53.952650   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:53.952672   83859 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-048296 && echo "no-preload-048296" | sudo tee /etc/hostname
	I1209 23:59:54.065706   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-048296
	
	I1209 23:59:54.065738   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.068629   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.068942   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.068975   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.069241   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.069493   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.069731   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.069939   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.070156   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.070325   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.070346   83859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-048296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-048296/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-048296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:54.180424   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
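	The SSH command above makes sure /etc/hosts maps the node's hostname: if no line already ends in the hostname, it rewrites an existing "127.0.1.1 ..." entry or appends a new one. A minimal Go sketch of the same logic is shown below; it operates on a scratch file path rather than the real /etc/hosts and is a simplification, not minikube's provisioner code.

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell run over SSH above: if no hosts line
	// already ends in the hostname, rewrite an existing "127.0.1.1 ..." line or
	// append a new "127.0.1.1 <hostname>" entry.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		content := string(data)

		// Equivalent of: grep -xq '.*\s<hostname>' /etc/hosts
		if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(content) {
			return nil
		}

		entry := "127.0.1.1 " + hostname
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(content) {
			content = loopback.ReplaceAllString(content, entry)
		} else {
			if !strings.HasSuffix(content, "\n") {
				content += "\n"
			}
			content += entry + "\n"
		}
		return os.WriteFile(path, []byte(content), 0644)
	}

	func main() {
		// Hypothetical scratch file; the real provisioner targets /etc/hosts on the guest.
		if err := ensureHostsEntry("hosts.test", "no-preload-048296"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}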
	I1209 23:59:54.180460   83859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:54.180479   83859 buildroot.go:174] setting up certificates
	I1209 23:59:54.180490   83859 provision.go:84] configureAuth start
	I1209 23:59:54.180499   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:54.180802   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:54.183349   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.183703   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.183732   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.183852   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.186076   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.186341   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.186372   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.186498   83859 provision.go:143] copyHostCerts
	I1209 23:59:54.186554   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:54.186564   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:54.186627   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:54.186715   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:54.186723   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:54.186744   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:54.186805   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:54.186814   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:54.186831   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:54.186881   83859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.no-preload-048296 san=[127.0.0.1 192.168.61.182 localhost minikube no-preload-048296]
	I1209 23:59:54.291230   83859 provision.go:177] copyRemoteCerts
	I1209 23:59:54.291296   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:54.291319   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.293968   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.294257   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.294283   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.294451   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.294654   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.294814   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.294963   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.377572   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:54.401222   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 23:59:54.425323   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:59:54.448358   83859 provision.go:87] duration metric: took 267.838318ms to configureAuth
	I1209 23:59:54.448387   83859 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:54.448563   83859 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:54.448629   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.451082   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.451422   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.451450   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.451678   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.451906   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.452088   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.452222   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.452374   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.452550   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.452567   83859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:54.665629   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:54.665663   83859 machine.go:96] duration metric: took 817.991465ms to provisionDockerMachine
	I1209 23:59:54.665677   83859 start.go:293] postStartSetup for "no-preload-048296" (driver="kvm2")
	I1209 23:59:54.665690   83859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:54.665712   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.666016   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:54.666043   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.668993   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.669501   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.669532   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.669666   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.669859   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.670004   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.670160   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.752122   83859 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:54.756546   83859 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:54.756569   83859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:54.756640   83859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:54.756706   83859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:54.756797   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:54.766465   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:54.792214   83859 start.go:296] duration metric: took 126.521539ms for postStartSetup
	I1209 23:59:54.792263   83859 fix.go:56] duration metric: took 19.967992145s for fixHost
	I1209 23:59:54.792284   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.794921   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.795304   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.795334   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.795549   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.795807   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.795986   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.796154   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.796333   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.796485   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.796495   83859 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:54.900382   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788794.870799171
	
	I1209 23:59:54.900413   83859 fix.go:216] guest clock: 1733788794.870799171
	I1209 23:59:54.900423   83859 fix.go:229] Guest: 2024-12-09 23:59:54.870799171 +0000 UTC Remote: 2024-12-09 23:59:54.792267927 +0000 UTC m=+357.892420200 (delta=78.531244ms)
	I1209 23:59:54.900443   83859 fix.go:200] guest clock delta is within tolerance: 78.531244ms
	I1209 23:59:54.900448   83859 start.go:83] releasing machines lock for "no-preload-048296", held for 20.076230285s
	I1209 23:59:54.900466   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.900763   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:54.903437   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.903762   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.903785   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.903941   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904412   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904583   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904674   83859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:54.904735   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.904788   83859 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:54.904815   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.907540   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.907693   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.907936   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.907960   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.908083   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.908098   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.908108   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.908269   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.908332   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.908431   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.908578   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.908607   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.908750   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.908753   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.980588   83859 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:55.003079   83859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:55.152159   83859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:55.158212   83859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:55.158284   83859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:55.177510   83859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:55.177539   83859 start.go:495] detecting cgroup driver to use...
	I1209 23:59:55.177616   83859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:55.194262   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:55.210699   83859 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:55.210770   83859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:55.226308   83859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:55.242175   83859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:55.361845   83859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:55.500415   83859 docker.go:233] disabling docker service ...
	I1209 23:59:55.500487   83859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:55.515689   83859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:55.528651   83859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:55.663341   83859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:55.776773   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:55.790155   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:55.807749   83859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:59:55.807807   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.817580   83859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:55.817644   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.827975   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.837871   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.848118   83859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:55.858322   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.867754   83859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.884626   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
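	The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch CRI-O to the cgroupfs cgroup manager, reset conmon_cgroup, and adjust the unprivileged-port sysctl. The Go sketch below covers just the first two of those edits, with the path and values passed in as parameters; it is illustrative only and not the code minikube actually runs.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// patchCrioConf applies the same style of in-place edits as the first two sed
	// commands above: pin the pause image and force the cgroupfs cgroup manager in
	// a CRI-O drop-in config. Path and values are parameters, not fixed host paths.
	func patchCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		conf := string(data)

		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))

		return os.WriteFile(path, []byte(conf), 0644)
	}

	func main() {
		// Hypothetical local copy of the drop-in, rather than /etc/crio/crio.conf.d/02-crio.conf.
		if err := patchCrioConf("02-crio.conf", "registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}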
	I1209 23:59:55.894254   83859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:55.903128   83859 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:55.903187   83859 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:55.914887   83859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:55.924665   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:56.028206   83859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:56.117484   83859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:56.117573   83859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:56.122345   83859 start.go:563] Will wait 60s for crictl version
	I1209 23:59:56.122401   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.126032   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:56.161884   83859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:56.161978   83859 ssh_runner.go:195] Run: crio --version
	I1209 23:59:56.194758   83859 ssh_runner.go:195] Run: crio --version
	I1209 23:59:56.224190   83859 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:59:56.225560   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:56.228550   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:56.228928   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:56.228950   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:56.229163   83859 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:56.233615   83859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:56.245890   83859 kubeadm.go:883] updating cluster {Name:no-preload-048296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:56.246079   83859 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:59:56.246132   83859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:56.285601   83859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:59:56.285629   83859 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:59:56.285694   83859 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:56.285734   83859 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.285761   83859 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.285813   83859 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.285858   83859 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.285900   83859 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.285810   83859 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 23:59:56.285983   83859 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.287414   83859 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 23:59:56.287436   83859 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.287436   83859 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.287446   83859 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.287465   83859 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.287500   83859 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.287550   83859 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:56.287550   83859 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.437713   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.453590   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.457708   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.468937   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1209 23:59:56.474766   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.477321   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.483791   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.530686   83859 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1209 23:59:56.530735   83859 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.530786   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.603275   83859 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1209 23:59:56.603327   83859 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.603376   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.612591   83859 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1209 23:59:56.612635   83859 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.612686   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726597   83859 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1209 23:59:56.726643   83859 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.726650   83859 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1209 23:59:56.726682   83859 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.726692   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726728   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726752   83859 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1209 23:59:56.726777   83859 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.726808   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726824   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.726882   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.726889   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.795578   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.795624   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.795646   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.795581   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.795723   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.795847   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.905584   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.909282   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.927659   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.927832   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.927928   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.932487   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:53.256494   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:55.257888   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:54.991894   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:57.491500   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:55.645853   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.145844   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.644899   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.145431   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.645625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.145096   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.645933   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.145181   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.645624   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:00.145874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.971745   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1209 23:59:56.971844   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:56.987218   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 23:59:56.987380   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 23:59:57.050691   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:57.064778   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:57.087868   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:57.087972   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:57.089176   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 23:59:57.089235   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1209 23:59:57.089262   83859 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:57.089269   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 23:59:57.089311   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:57.089355   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1209 23:59:57.098939   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 23:59:57.099044   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 23:59:57.148482   83859 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1209 23:59:57.148532   83859 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:57.148588   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:57.173723   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 23:59:57.173810   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 23:59:57.186640   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 23:59:57.186734   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:00.723670   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.634331388s)
	I1210 00:00:00.723704   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1210 00:00:00.723717   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (3.634425647s)
	I1210 00:00:00.723751   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1210 00:00:00.723728   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 00:00:00.723767   83859 ssh_runner.go:235] Completed: which crictl: (3.575163822s)
	I1210 00:00:00.723815   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:00.723857   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (3.550033094s)
	I1210 00:00:00.723816   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 00:00:00.723909   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (3.537159613s)
	I1210 00:00:00.723934   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1210 00:00:00.723854   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.624679414s)
	I1210 00:00:00.723974   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1210 00:00:00.723880   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1209 23:59:57.756271   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:00.256451   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:02.256956   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:59.491859   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:01.992860   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:00.644886   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.145063   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.645932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.145743   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.645396   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.145007   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.645927   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.145876   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:05.145906   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.713826   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.989917623s)
	I1210 00:00:02.713866   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1210 00:00:02.713883   83859 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.99004494s)
	I1210 00:00:02.713896   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 00:00:02.713949   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:02.713952   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 00:00:02.753832   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:04.780306   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.066273178s)
	I1210 00:00:04.780344   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1210 00:00:04.780368   83859 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:04.780432   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:04.780430   83859 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.026560705s)
	I1210 00:00:04.780544   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 00:00:04.780618   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:06.756731   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.976269529s)
	I1210 00:00:06.756763   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1210 00:00:06.756774   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.976141757s)
	I1210 00:00:06.756793   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 00:00:06.756791   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 00:00:06.756846   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 00:00:04.258390   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:06.756422   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:04.490429   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:06.991054   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:08.991247   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:05.644919   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.145142   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.645240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.144864   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.145246   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.645965   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.645731   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:10.145638   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.712114   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.955245193s)
	I1210 00:00:08.712142   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1210 00:00:08.712166   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 00:00:08.712212   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 00:00:10.892294   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.180058873s)
	I1210 00:00:10.892319   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1210 00:00:10.892352   83859 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:10.892391   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:11.542174   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 00:00:11.542214   83859 cache_images.go:123] Successfully loaded all cached images
	I1210 00:00:11.542219   83859 cache_images.go:92] duration metric: took 15.256564286s to LoadCachedImages
	I1210 00:00:11.542230   83859 kubeadm.go:934] updating node { 192.168.61.182 8443 v1.31.2 crio true true} ...
	I1210 00:00:11.542334   83859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-048296 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:00:11.542397   83859 ssh_runner.go:195] Run: crio config
	I1210 00:00:11.586768   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:00:11.586787   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:00:11.586795   83859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:00:11.586817   83859 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.182 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-048296 NodeName:no-preload-048296 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:00:11.586932   83859 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-048296"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.182"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.182"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
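[editorial sketch] The dump above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new on the node. As a quick sanity check one could decode each document and print its apiVersion/kind; the sketch below is an assumption-laden illustration (file path taken from the log, gopkg.in/yaml.v3 as the parser), not part of minikube.

// Walk the multi-document kubeadm config and print each document's apiVersion/kind.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}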
	I1210 00:00:11.586992   83859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:00:11.597936   83859 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:00:11.597996   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:00:11.608068   83859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 00:00:11.624881   83859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:00:11.640934   83859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1210 00:00:11.658113   83859 ssh_runner.go:195] Run: grep 192.168.61.182	control-plane.minikube.internal$ /etc/hosts
	I1210 00:00:11.662360   83859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:00:11.674732   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:00:11.803743   83859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:00:11.821169   83859 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296 for IP: 192.168.61.182
	I1210 00:00:11.821191   83859 certs.go:194] generating shared ca certs ...
	I1210 00:00:11.821211   83859 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:00:11.821404   83859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1210 00:00:11.821485   83859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1210 00:00:11.821502   83859 certs.go:256] generating profile certs ...
	I1210 00:00:11.821583   83859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/client.key
	I1210 00:00:11.821644   83859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.key.9304569d
	I1210 00:00:11.821677   83859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.key
	I1210 00:00:11.821783   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1210 00:00:11.821811   83859 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1210 00:00:11.821821   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:00:11.821848   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1210 00:00:11.821881   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:00:11.821917   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1210 00:00:11.821965   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1210 00:00:11.822649   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:00:11.867065   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 00:00:11.898744   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:00:11.932711   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:00:08.758664   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:11.257156   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:11.491011   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:13.491036   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:10.645462   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.145646   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.645921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.145804   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.644924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.145055   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.645811   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.144877   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.645709   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.145827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.966670   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:00:11.997827   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:00:12.023344   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:00:12.048872   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:00:12.074332   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:00:12.097886   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1210 00:00:12.121883   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1210 00:00:12.145451   83859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:00:12.167089   83859 ssh_runner.go:195] Run: openssl version
	I1210 00:00:12.172747   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1210 00:00:12.183718   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.188458   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.188521   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.194537   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:00:12.205725   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:00:12.216441   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.221179   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.221237   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.226942   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:00:12.237830   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1210 00:00:12.248188   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.253689   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.253747   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.259326   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1210 00:00:12.269796   83859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:00:12.274316   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:00:12.280263   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:00:12.286235   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:00:12.292330   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:00:12.298347   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:00:12.304186   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
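[editorial sketch] The six "openssl x509 -noout ... -checkend 86400" runs above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds). The same check can be expressed directly with Go's crypto/x509; the sketch below is illustrative, with the certificate paths copied from the log.

// Report whether a PEM certificate expires within the given duration,
// mirroring "openssl x509 -noout -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, err)
	}
}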
	I1210 00:00:12.310189   83859 kubeadm.go:392] StartCluster: {Name:no-preload-048296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:00:12.310296   83859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:00:12.310349   83859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:00:12.346589   83859 cri.go:89] found id: ""
	I1210 00:00:12.346666   83859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:00:12.357674   83859 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 00:00:12.357701   83859 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 00:00:12.357753   83859 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 00:00:12.367817   83859 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:00:12.368827   83859 kubeconfig.go:125] found "no-preload-048296" server: "https://192.168.61.182:8443"
	I1210 00:00:12.371117   83859 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 00:00:12.380367   83859 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.182
	I1210 00:00:12.380393   83859 kubeadm.go:1160] stopping kube-system containers ...
	I1210 00:00:12.380404   83859 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 00:00:12.380446   83859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:00:12.413083   83859 cri.go:89] found id: ""
	I1210 00:00:12.413143   83859 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 00:00:12.429819   83859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:00:12.439244   83859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:00:12.439269   83859 kubeadm.go:157] found existing configuration files:
	
	I1210 00:00:12.439336   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:00:12.448182   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:00:12.448257   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:00:12.457759   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:00:12.467372   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:00:12.467449   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:00:12.476877   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:00:12.485831   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:00:12.485898   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:00:12.495537   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:00:12.504333   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:00:12.504398   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:00:12.514572   83859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:00:12.524134   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:12.619683   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.361844   83859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.742129567s)
	I1210 00:00:14.361876   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.585241   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.653166   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.771204   83859 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:00:14.771306   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.272143   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.771676   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.789350   83859 api_server.go:72] duration metric: took 1.018150373s to wait for apiserver process to appear ...
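[editorial sketch] The "sudo pgrep -xnf kube-apiserver.*minikube.*" lines that repeat roughly every 500ms (here for 83859, and throughout this section for 84547) are the wait for the apiserver process to appear. A hedged sketch of that polling loop, using the same pgrep flags (-x exact match, -n newest, -f full command line); the timeout is an illustrative assumption.

// Poll "sudo pgrep -xnf <pattern>" until a matching process exists or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exited 0: a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute))
}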
	I1210 00:00:15.789378   83859 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:00:15.789396   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:15.789878   83859 api_server.go:269] stopped: https://192.168.61.182:8443/healthz: Get "https://192.168.61.182:8443/healthz": dial tcp 192.168.61.182:8443: connect: connection refused
	I1210 00:00:16.289518   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:13.757326   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:15.757843   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:15.491876   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:17.991160   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:18.572189   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:00:18.572233   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:00:18.572262   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:18.607691   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:00:18.607732   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:00:18.790007   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:18.794252   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:00:18.794281   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:00:19.289819   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:19.294079   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:00:19.294104   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:00:19.789647   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:19.794119   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 200:
	ok
	I1210 00:00:19.800447   83859 api_server.go:141] control plane version: v1.31.2
	I1210 00:00:19.800477   83859 api_server.go:131] duration metric: took 4.011091942s to wait for apiserver health ...
	I1210 00:00:19.800488   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:00:19.800496   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:00:19.802341   83859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
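[editorial sketch] The healthz loop above probes https://192.168.61.182:8443/healthz until it answers 200 ("ok"), tolerating the anonymous 403 and the 500 with failing poststarthooks along the way. Below is a minimal sketch of such a poller; skipping TLS verification mirrors the anonymous probe in the log and is not something a production client should do.

// Poll an apiserver /healthz endpoint until it returns HTTP 200 or the timeout expires.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok", as in the 200 response in the log
			}
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.182:8443/healthz", 4*time.Minute))
}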
	I1210 00:00:15.645243   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.145027   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.645018   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.145001   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.644893   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.145189   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.645579   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.144946   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.645832   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:20.145634   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.803715   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:00:19.814324   83859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 00:00:19.861326   83859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:00:19.873554   83859 system_pods.go:59] 8 kube-system pods found
	I1210 00:00:19.873588   83859 system_pods.go:61] "coredns-7c65d6cfc9-smnt7" [7d85cb49-3cbb-4133-acab-284ddf0b72f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:00:19.873597   83859 system_pods.go:61] "etcd-no-preload-048296" [3eeaede5-b759-49e5-8267-0aaf3c374178] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 00:00:19.873606   83859 system_pods.go:61] "kube-apiserver-no-preload-048296" [8e9d39ce-218a-4fe4-86ea-234d97a5e51d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 00:00:19.873615   83859 system_pods.go:61] "kube-controller-manager-no-preload-048296" [e28ccf7d-13e4-4b55-81d0-2dd348584bca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 00:00:19.873622   83859 system_pods.go:61] "kube-proxy-z479r" [eb6588a6-ac3c-4066-b69e-5613b03ade53] Running
	I1210 00:00:19.873634   83859 system_pods.go:61] "kube-scheduler-no-preload-048296" [0a184373-35d4-4ac6-92fb-65a4faf1c2e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 00:00:19.873646   83859 system_pods.go:61] "metrics-server-6867b74b74-sd58c" [19d64157-7b9b-4a39-8e0c-2c48729fbc93] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:00:19.873653   83859 system_pods.go:61] "storage-provisioner" [30b3ed5d-f843-4090-827f-e1ee6a7ea274] Running
	I1210 00:00:19.873661   83859 system_pods.go:74] duration metric: took 12.308897ms to wait for pod list to return data ...
	I1210 00:00:19.873668   83859 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:00:19.877459   83859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:00:19.877482   83859 node_conditions.go:123] node cpu capacity is 2
	I1210 00:00:19.877496   83859 node_conditions.go:105] duration metric: took 3.822698ms to run NodePressure ...
	I1210 00:00:19.877513   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:20.145838   83859 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 00:00:20.151038   83859 kubeadm.go:739] kubelet initialised
	I1210 00:00:20.151059   83859 kubeadm.go:740] duration metric: took 5.1997ms waiting for restarted kubelet to initialise ...
	I1210 00:00:20.151068   83859 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:00:20.157554   83859 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.162413   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.162437   83859 pod_ready.go:82] duration metric: took 4.858113ms for pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.162446   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.162452   83859 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.167285   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "etcd-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.167312   83859 pod_ready.go:82] duration metric: took 4.84903ms for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.167321   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "etcd-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.167328   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.172365   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "kube-apiserver-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.172386   83859 pod_ready.go:82] duration metric: took 5.052446ms for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.172395   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "kube-apiserver-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.172401   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
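[editorial sketch] The pod_ready.go lines above wait up to 4m0s for each system-critical pod to report the Ready condition, skipping pods whose node is itself not Ready. A rough client-go sketch of the core check follows; the kubeconfig path is a placeholder assumption, and this is not minikube's pod_ready.go implementation.

// Wait for a pod's Ready condition using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-controller-manager-no-preload-048296", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}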
	I1210 00:00:18.257283   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.756752   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.490469   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:22.990635   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.645094   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.145179   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.645829   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.145159   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.645390   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.645205   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.145625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.645299   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:25.144971   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.180104   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:24.680524   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:26.680818   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:22.757621   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:25.257680   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:24.991719   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:27.490099   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:25.644977   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.145967   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.144972   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.645030   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.145227   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.645104   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.144957   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.645516   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:30.145343   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.678941   83859 pod_ready.go:93] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:28.678972   83859 pod_ready.go:82] duration metric: took 8.506560777s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.678985   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z479r" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.684896   83859 pod_ready.go:93] pod "kube-proxy-z479r" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:28.684917   83859 pod_ready.go:82] duration metric: took 5.924915ms for pod "kube-proxy-z479r" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.684926   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:30.691918   83859 pod_ready.go:103] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:31.691333   83859 pod_ready.go:93] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:31.691362   83859 pod_ready.go:82] duration metric: took 3.006429416s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:31.691372   83859 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:27.756937   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:30.260931   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:29.990322   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:32.489835   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:30.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.145393   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.645952   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.145695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.644910   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.146012   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.645636   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.145664   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.645813   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:35.145365   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.697948   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.197360   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:32.756044   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:34.756585   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.756683   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:34.490676   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.491117   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:38.991231   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:35.645823   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.145410   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.144994   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.645701   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.144880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.645735   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.145806   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.645695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:40.145692   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.198422   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:40.697971   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:39.256015   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:41.256607   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:41.490276   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:43.989182   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:40.644935   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.145320   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.645470   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.145714   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.644961   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.145687   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.644990   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:44.144958   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:44.645780   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:44.645870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:44.680200   84547 cri.go:89] found id: ""
	I1210 00:00:44.680321   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.680334   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:44.680343   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:44.680413   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:44.715278   84547 cri.go:89] found id: ""
	I1210 00:00:44.715305   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.715312   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:44.715318   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:44.715377   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:44.749908   84547 cri.go:89] found id: ""
	I1210 00:00:44.749933   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.749941   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:44.749946   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:44.750008   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:44.786851   84547 cri.go:89] found id: ""
	I1210 00:00:44.786882   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.786893   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:44.786901   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:44.786966   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:44.830060   84547 cri.go:89] found id: ""
	I1210 00:00:44.830106   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.830117   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:44.830125   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:44.830191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:44.865525   84547 cri.go:89] found id: ""
	I1210 00:00:44.865560   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.865571   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:44.865579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:44.865643   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:44.902529   84547 cri.go:89] found id: ""
	I1210 00:00:44.902565   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.902575   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:44.902584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:44.902647   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:44.938876   84547 cri.go:89] found id: ""
	I1210 00:00:44.938904   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.938914   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:44.938925   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:44.938939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:44.992533   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:44.992570   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:45.005990   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:45.006020   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:45.116774   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:45.116797   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:45.116810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:45.187376   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:45.187411   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
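The 84547 cycle above is the control-plane health check for the cluster running the v1.20.0 binaries: pgrep finds no kube-apiserver process, each component is then looked up through CRI-O with `crictl ps -a --quiet --name=<component>`, every query returns an empty ID list ("0 containers"), and kubelet, dmesg, node and CRI-O logs are gathered as a fallback. A minimal sketch of that crictl check, assuming crictl and sudo are available on the node; this is illustrative only, not minikube's own code.

// crictl_check_sketch.go - illustrative sketch of the "found id / 0 containers" check logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// foundContainers lists container IDs (running or exited) whose name matches the given component.
func foundContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// An empty result mirrors the `found id: ""` / `0 containers: []` lines in the log.
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	ids, err := foundContainers("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}

Run on the node itself (for example via minikube ssh); an empty list for kube-apiserver corresponds to the "No container was found matching" warnings above.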
	I1210 00:00:42.698555   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.198082   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:43.756559   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.756755   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.990399   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.991485   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.728964   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:47.742500   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:47.742560   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:47.775751   84547 cri.go:89] found id: ""
	I1210 00:00:47.775783   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.775793   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:47.775799   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:47.775848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:47.810202   84547 cri.go:89] found id: ""
	I1210 00:00:47.810228   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.810235   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:47.810241   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:47.810302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:47.844686   84547 cri.go:89] found id: ""
	I1210 00:00:47.844730   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.844739   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:47.844745   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:47.844802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:47.884076   84547 cri.go:89] found id: ""
	I1210 00:00:47.884108   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.884119   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:47.884127   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:47.884188   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:47.923697   84547 cri.go:89] found id: ""
	I1210 00:00:47.923722   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.923729   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:47.923734   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:47.923791   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:47.955790   84547 cri.go:89] found id: ""
	I1210 00:00:47.955816   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.955824   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:47.955829   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:47.955890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:48.021501   84547 cri.go:89] found id: ""
	I1210 00:00:48.021529   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.021537   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:48.021543   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:48.021592   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:48.054645   84547 cri.go:89] found id: ""
	I1210 00:00:48.054675   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.054688   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:48.054699   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:48.054714   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:48.135706   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:48.135729   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:48.135746   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:48.212397   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:48.212438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:48.254002   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:48.254033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:48.305858   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:48.305891   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:47.697503   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:49.698076   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.758210   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.257400   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.490507   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:52.989527   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.819644   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:50.834153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:50.834233   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:50.872357   84547 cri.go:89] found id: ""
	I1210 00:00:50.872391   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.872402   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:50.872409   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:50.872471   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:50.909793   84547 cri.go:89] found id: ""
	I1210 00:00:50.909826   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.909839   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:50.909848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:50.909915   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:50.947167   84547 cri.go:89] found id: ""
	I1210 00:00:50.947206   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.947219   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:50.947228   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:50.947304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:50.987194   84547 cri.go:89] found id: ""
	I1210 00:00:50.987219   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.987228   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:50.987234   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:50.987287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:51.027433   84547 cri.go:89] found id: ""
	I1210 00:00:51.027464   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.027476   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:51.027483   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:51.027586   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:51.062156   84547 cri.go:89] found id: ""
	I1210 00:00:51.062186   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.062200   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:51.062208   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:51.062267   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:51.099161   84547 cri.go:89] found id: ""
	I1210 00:00:51.099187   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.099195   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:51.099202   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:51.099249   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:51.134400   84547 cri.go:89] found id: ""
	I1210 00:00:51.134432   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.134445   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:51.134459   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:51.134474   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:51.171842   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:51.171869   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:51.222815   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:51.222854   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:51.236499   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:51.236537   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:51.307835   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:51.307856   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:51.307871   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:53.888771   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:53.902167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:53.902234   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:53.940724   84547 cri.go:89] found id: ""
	I1210 00:00:53.940748   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.940756   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:53.940767   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:53.940823   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:53.975076   84547 cri.go:89] found id: ""
	I1210 00:00:53.975111   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.975122   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:53.975128   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:53.975191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:54.011123   84547 cri.go:89] found id: ""
	I1210 00:00:54.011149   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.011157   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:54.011162   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:54.011207   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:54.043594   84547 cri.go:89] found id: ""
	I1210 00:00:54.043620   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.043628   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:54.043633   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:54.043679   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:54.076172   84547 cri.go:89] found id: ""
	I1210 00:00:54.076208   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.076219   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:54.076227   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:54.076292   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:54.110646   84547 cri.go:89] found id: ""
	I1210 00:00:54.110679   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.110691   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:54.110711   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:54.110784   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:54.145901   84547 cri.go:89] found id: ""
	I1210 00:00:54.145929   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.145940   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:54.145947   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:54.146007   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:54.178589   84547 cri.go:89] found id: ""
	I1210 00:00:54.178618   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.178629   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:54.178639   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:54.178652   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:54.231005   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:54.231040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:54.244583   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:54.244608   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:54.318759   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:54.318787   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:54.318800   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:54.395975   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:54.396012   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:52.198012   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:54.199221   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:56.697929   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:52.756419   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:54.757807   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:57.257555   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:55.490621   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:57.491632   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:56.969699   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:56.985159   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:56.985232   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:57.020768   84547 cri.go:89] found id: ""
	I1210 00:00:57.020798   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.020807   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:57.020812   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:57.020861   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:57.055796   84547 cri.go:89] found id: ""
	I1210 00:00:57.055826   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.055834   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:57.055839   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:57.055897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:57.089345   84547 cri.go:89] found id: ""
	I1210 00:00:57.089375   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.089385   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:57.089392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:57.089460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:57.123969   84547 cri.go:89] found id: ""
	I1210 00:00:57.123993   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.124004   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:57.124012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:57.124075   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:57.158744   84547 cri.go:89] found id: ""
	I1210 00:00:57.158769   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.158777   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:57.158783   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:57.158842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:57.203933   84547 cri.go:89] found id: ""
	I1210 00:00:57.203954   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.203962   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:57.203968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:57.204025   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:57.238180   84547 cri.go:89] found id: ""
	I1210 00:00:57.238213   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.238224   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:57.238231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:57.238287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:57.273583   84547 cri.go:89] found id: ""
	I1210 00:00:57.273612   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.273623   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:57.273633   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:57.273648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:57.345992   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:57.346019   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:57.346035   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:57.428335   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:57.428369   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:57.466437   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:57.466472   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:57.517064   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:57.517099   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.030599   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:00.045151   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:00.045229   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:00.083050   84547 cri.go:89] found id: ""
	I1210 00:01:00.083074   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.083085   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:00.083093   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:00.083152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:00.119082   84547 cri.go:89] found id: ""
	I1210 00:01:00.119107   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.119120   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:00.119126   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:00.119185   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:00.166888   84547 cri.go:89] found id: ""
	I1210 00:01:00.166921   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.166931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:00.166939   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:00.166998   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:00.209504   84547 cri.go:89] found id: ""
	I1210 00:01:00.209526   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.209533   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:00.209539   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:00.209595   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:00.247649   84547 cri.go:89] found id: ""
	I1210 00:01:00.247672   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.247680   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:00.247686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:00.247736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:00.294419   84547 cri.go:89] found id: ""
	I1210 00:01:00.294445   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.294455   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:00.294463   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:00.294526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:00.328634   84547 cri.go:89] found id: ""
	I1210 00:01:00.328667   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.328677   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:00.328684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:00.328751   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:00.364667   84547 cri.go:89] found id: ""
	I1210 00:01:00.364696   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.364724   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:00.364733   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:00.364745   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.377518   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:00.377552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:00.449145   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:00.449166   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:00.449178   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:58.698000   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.196968   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:59.756367   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.756407   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:59.989896   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.991121   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:00.529462   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:00.529499   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:00.570471   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:00.570503   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.122835   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:03.136329   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:03.136405   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:03.170352   84547 cri.go:89] found id: ""
	I1210 00:01:03.170380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.170388   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:03.170393   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:03.170454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:03.204339   84547 cri.go:89] found id: ""
	I1210 00:01:03.204368   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.204379   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:03.204386   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:03.204448   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:03.237516   84547 cri.go:89] found id: ""
	I1210 00:01:03.237559   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.237572   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:03.237579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:03.237641   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:03.273379   84547 cri.go:89] found id: ""
	I1210 00:01:03.273407   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.273416   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:03.273421   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:03.274017   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:03.308987   84547 cri.go:89] found id: ""
	I1210 00:01:03.309026   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.309038   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:03.309046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:03.309102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:03.341913   84547 cri.go:89] found id: ""
	I1210 00:01:03.341943   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.341954   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:03.341961   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:03.342009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:03.375403   84547 cri.go:89] found id: ""
	I1210 00:01:03.375429   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.375437   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:03.375442   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:03.375494   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:03.408352   84547 cri.go:89] found id: ""
	I1210 00:01:03.408380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.408387   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:03.408397   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:03.408409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:03.483789   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:03.483831   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:03.532021   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:03.532050   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.585095   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:03.585129   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:03.600062   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:03.600089   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:03.671633   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:03.197410   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:05.698110   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:03.757014   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.257259   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:04.490159   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.490493   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:08.991447   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.172345   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:06.199809   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:06.199897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:06.240690   84547 cri.go:89] found id: ""
	I1210 00:01:06.240749   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.240760   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:06.240769   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:06.240822   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:06.275743   84547 cri.go:89] found id: ""
	I1210 00:01:06.275770   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.275779   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:06.275786   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:06.275848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:06.310399   84547 cri.go:89] found id: ""
	I1210 00:01:06.310427   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.310438   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:06.310445   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:06.310507   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:06.345278   84547 cri.go:89] found id: ""
	I1210 00:01:06.345308   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.345320   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:06.345328   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:06.345394   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:06.381926   84547 cri.go:89] found id: ""
	I1210 00:01:06.381961   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.381988   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:06.381994   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:06.382064   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:06.415341   84547 cri.go:89] found id: ""
	I1210 00:01:06.415367   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.415377   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:06.415385   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:06.415438   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:06.449701   84547 cri.go:89] found id: ""
	I1210 00:01:06.449733   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.449743   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:06.449750   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:06.449814   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:06.484271   84547 cri.go:89] found id: ""
	I1210 00:01:06.484298   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.484307   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:06.484317   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:06.484333   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:06.570279   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:06.570313   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:06.607812   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:06.607845   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:06.658206   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:06.658243   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:06.670949   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:06.670978   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:06.745785   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
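Every `kubectl describe nodes` attempt in this cycle fails with "connection refused" on localhost:8443: no kube-apiserver container ever comes up, so nothing is listening on the apiserver port. A trivial reachability probe that reproduces that symptom, illustrative only and not part of the test suite:

// apiserver_probe_sketch.go - illustrative TCP probe of the apiserver endpoint seen failing above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Expect "connection refused" while the apiserver is down, as in the log.
		fmt.Println("apiserver endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}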
	I1210 00:01:09.246367   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:09.259390   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:09.259462   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:09.291811   84547 cri.go:89] found id: ""
	I1210 00:01:09.291841   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.291853   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:09.291865   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:09.291922   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:09.326996   84547 cri.go:89] found id: ""
	I1210 00:01:09.327027   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.327038   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:09.327045   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:09.327109   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:09.361026   84547 cri.go:89] found id: ""
	I1210 00:01:09.361062   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.361073   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:09.361081   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:09.361152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:09.397116   84547 cri.go:89] found id: ""
	I1210 00:01:09.397148   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.397159   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:09.397166   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:09.397225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:09.428989   84547 cri.go:89] found id: ""
	I1210 00:01:09.429025   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.429039   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:09.429046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:09.429111   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:09.462491   84547 cri.go:89] found id: ""
	I1210 00:01:09.462535   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.462547   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:09.462572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:09.462652   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:09.502705   84547 cri.go:89] found id: ""
	I1210 00:01:09.502728   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.502735   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:09.502740   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:09.502798   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:09.538721   84547 cri.go:89] found id: ""
	I1210 00:01:09.538744   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.538754   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:09.538766   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:09.538779   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:09.590732   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:09.590768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:09.604928   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:09.604960   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:09.681457   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:09.681484   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:09.681498   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:09.758116   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:09.758149   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:07.698904   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.198215   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:08.755738   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.756172   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.992252   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:13.491283   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:12.307016   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:12.320284   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:12.320371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:12.351984   84547 cri.go:89] found id: ""
	I1210 00:01:12.352007   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.352016   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:12.352024   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:12.352086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:12.387797   84547 cri.go:89] found id: ""
	I1210 00:01:12.387829   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.387840   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:12.387848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:12.387924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:12.417934   84547 cri.go:89] found id: ""
	I1210 00:01:12.417968   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.417979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:12.417987   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:12.418048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:12.451771   84547 cri.go:89] found id: ""
	I1210 00:01:12.451802   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.451815   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:12.451822   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:12.451890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:12.485007   84547 cri.go:89] found id: ""
	I1210 00:01:12.485037   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.485048   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:12.485055   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:12.485117   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:12.527851   84547 cri.go:89] found id: ""
	I1210 00:01:12.527879   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.527895   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:12.527905   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:12.527967   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:12.560341   84547 cri.go:89] found id: ""
	I1210 00:01:12.560364   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.560372   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:12.560377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:12.560428   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:12.593118   84547 cri.go:89] found id: ""
	I1210 00:01:12.593143   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.593150   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:12.593158   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:12.593169   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:12.658694   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:12.658717   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:12.658749   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:12.739742   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:12.739776   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:12.780949   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:12.780977   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:12.829570   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:12.829607   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:15.344239   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:15.358424   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:15.358527   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:15.390712   84547 cri.go:89] found id: ""
	I1210 00:01:15.390744   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.390757   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:15.390764   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:15.390847   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:15.424198   84547 cri.go:89] found id: ""
	I1210 00:01:15.424226   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.424234   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:15.424239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:15.424306   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:15.458340   84547 cri.go:89] found id: ""
	I1210 00:01:15.458363   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.458370   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:15.458377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:15.458422   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:15.491468   84547 cri.go:89] found id: ""
	I1210 00:01:15.491497   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.491507   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:15.491520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:15.491597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:12.698224   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.197841   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:12.756618   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:14.759492   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:17.256075   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.491494   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:17.989410   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.528405   84547 cri.go:89] found id: ""
	I1210 00:01:15.528437   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.528448   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:15.528455   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:15.528517   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:15.562966   84547 cri.go:89] found id: ""
	I1210 00:01:15.562995   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.563005   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:15.563012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:15.563063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:15.595815   84547 cri.go:89] found id: ""
	I1210 00:01:15.595838   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.595845   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:15.595850   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:15.595907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:15.631296   84547 cri.go:89] found id: ""
	I1210 00:01:15.631322   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.631333   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:15.631347   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:15.631362   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:15.680177   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:15.680213   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:15.693685   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:15.693724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:15.760285   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:15.760312   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:15.760326   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:15.837814   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:15.837855   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:18.377112   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:18.390167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:18.390230   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:18.424095   84547 cri.go:89] found id: ""
	I1210 00:01:18.424127   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.424140   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:18.424150   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:18.424216   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:18.456840   84547 cri.go:89] found id: ""
	I1210 00:01:18.456868   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.456876   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:18.456882   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:18.456938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:18.492109   84547 cri.go:89] found id: ""
	I1210 00:01:18.492134   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.492145   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:18.492153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:18.492212   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:18.530445   84547 cri.go:89] found id: ""
	I1210 00:01:18.530472   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.530480   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:18.530486   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:18.530549   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:18.567036   84547 cri.go:89] found id: ""
	I1210 00:01:18.567060   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.567070   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:18.567077   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:18.567136   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:18.606829   84547 cri.go:89] found id: ""
	I1210 00:01:18.606853   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.606863   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:18.606870   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:18.606927   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:18.646032   84547 cri.go:89] found id: ""
	I1210 00:01:18.646061   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.646070   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:18.646075   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:18.646127   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:18.683969   84547 cri.go:89] found id: ""
	I1210 00:01:18.683997   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.684008   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:18.684019   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:18.684036   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:18.735760   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:18.735807   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:18.751689   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:18.751724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:18.823252   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:18.823272   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:18.823287   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:18.908071   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:18.908110   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:17.698577   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:20.197901   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:19.757398   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:22.255920   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:20.490512   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:22.990168   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:21.445415   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:21.458091   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:21.458152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:21.492615   84547 cri.go:89] found id: ""
	I1210 00:01:21.492646   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.492657   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:21.492664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:21.492718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:21.529564   84547 cri.go:89] found id: ""
	I1210 00:01:21.529586   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.529594   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:21.529599   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:21.529669   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:21.566409   84547 cri.go:89] found id: ""
	I1210 00:01:21.566441   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.566450   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:21.566456   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:21.566528   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:21.601783   84547 cri.go:89] found id: ""
	I1210 00:01:21.601815   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.601827   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:21.601835   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:21.601895   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:21.635740   84547 cri.go:89] found id: ""
	I1210 00:01:21.635762   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.635769   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:21.635775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:21.635831   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:21.667777   84547 cri.go:89] found id: ""
	I1210 00:01:21.667806   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.667826   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:21.667834   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:21.667894   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:21.704355   84547 cri.go:89] found id: ""
	I1210 00:01:21.704380   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.704388   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:21.704398   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:21.704457   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:21.737365   84547 cri.go:89] found id: ""
	I1210 00:01:21.737398   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.737410   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:21.737422   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:21.737438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:21.751394   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:21.751434   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:21.823217   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:21.823240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:21.823255   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:21.910097   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:21.910131   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:21.950087   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:21.950123   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.504626   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:24.517920   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:24.517997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:24.551766   84547 cri.go:89] found id: ""
	I1210 00:01:24.551806   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.551814   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:24.551821   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:24.551880   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:24.588210   84547 cri.go:89] found id: ""
	I1210 00:01:24.588246   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.588256   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:24.588263   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:24.588341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:24.620569   84547 cri.go:89] found id: ""
	I1210 00:01:24.620598   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.620607   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:24.620613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:24.620673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:24.657527   84547 cri.go:89] found id: ""
	I1210 00:01:24.657550   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.657558   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:24.657564   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:24.657636   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:24.689382   84547 cri.go:89] found id: ""
	I1210 00:01:24.689410   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.689418   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:24.689423   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:24.689475   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:24.727170   84547 cri.go:89] found id: ""
	I1210 00:01:24.727207   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.727224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:24.727230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:24.727280   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:24.760733   84547 cri.go:89] found id: ""
	I1210 00:01:24.760759   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.760769   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:24.760775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:24.760842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:24.794750   84547 cri.go:89] found id: ""
	I1210 00:01:24.794782   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.794791   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:24.794799   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:24.794810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.847403   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:24.847441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:24.862240   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:24.862273   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:24.936373   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:24.936396   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:24.936409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:25.011126   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:25.011160   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:22.198527   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.697981   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.257466   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:26.258086   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.990792   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:26.991843   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:27.551134   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:27.564526   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:27.564665   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:27.600652   84547 cri.go:89] found id: ""
	I1210 00:01:27.600677   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.600685   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:27.600690   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:27.600753   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:27.643593   84547 cri.go:89] found id: ""
	I1210 00:01:27.643625   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.643636   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:27.643649   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:27.643705   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:27.680633   84547 cri.go:89] found id: ""
	I1210 00:01:27.680664   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.680676   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:27.680684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:27.680748   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:27.721790   84547 cri.go:89] found id: ""
	I1210 00:01:27.721824   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.721832   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:27.721837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:27.721901   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:27.756462   84547 cri.go:89] found id: ""
	I1210 00:01:27.756492   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.756504   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:27.756512   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:27.756574   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:27.792692   84547 cri.go:89] found id: ""
	I1210 00:01:27.792728   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.792838   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:27.792851   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:27.792912   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:27.830065   84547 cri.go:89] found id: ""
	I1210 00:01:27.830098   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.830107   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:27.830116   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:27.830182   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:27.867516   84547 cri.go:89] found id: ""
	I1210 00:01:27.867557   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.867580   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:27.867595   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:27.867611   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:27.917622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:27.917660   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:27.931687   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:27.931717   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:28.003827   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:28.003860   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:28.003876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:28.081375   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:28.081410   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:27.197999   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:29.198364   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:31.698061   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:28.755855   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:30.756687   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:29.489992   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:31.490850   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:33.990464   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:30.622885   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:30.635990   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:30.636065   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:30.671418   84547 cri.go:89] found id: ""
	I1210 00:01:30.671453   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.671465   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:30.671475   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:30.671548   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:30.707168   84547 cri.go:89] found id: ""
	I1210 00:01:30.707203   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.707214   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:30.707222   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:30.707275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:30.741349   84547 cri.go:89] found id: ""
	I1210 00:01:30.741377   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.741386   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:30.741395   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:30.741449   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:30.774952   84547 cri.go:89] found id: ""
	I1210 00:01:30.774985   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.774997   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:30.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:30.775062   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:30.809596   84547 cri.go:89] found id: ""
	I1210 00:01:30.809623   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.809631   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:30.809636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:30.809699   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:30.844256   84547 cri.go:89] found id: ""
	I1210 00:01:30.844288   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.844300   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:30.844308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:30.844371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:30.876086   84547 cri.go:89] found id: ""
	I1210 00:01:30.876113   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.876124   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:30.876131   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:30.876195   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:30.910858   84547 cri.go:89] found id: ""
	I1210 00:01:30.910884   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.910895   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:30.910905   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:30.910920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:30.990855   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:30.990876   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:30.990888   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:31.069186   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:31.069221   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:31.109653   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:31.109689   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:31.164962   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:31.165001   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:33.679784   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:33.692222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:33.692310   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:33.727937   84547 cri.go:89] found id: ""
	I1210 00:01:33.727960   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.727967   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:33.727973   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:33.728022   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:33.762122   84547 cri.go:89] found id: ""
	I1210 00:01:33.762151   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.762162   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:33.762171   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:33.762237   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:33.793855   84547 cri.go:89] found id: ""
	I1210 00:01:33.793883   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.793894   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:33.793903   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:33.793961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:33.827240   84547 cri.go:89] found id: ""
	I1210 00:01:33.827280   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.827292   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:33.827300   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:33.827366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:33.863705   84547 cri.go:89] found id: ""
	I1210 00:01:33.863728   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.863738   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:33.863743   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:33.863792   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:33.897189   84547 cri.go:89] found id: ""
	I1210 00:01:33.897213   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.897224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:33.897229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:33.897282   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:33.930001   84547 cri.go:89] found id: ""
	I1210 00:01:33.930034   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.930044   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:33.930052   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:33.930113   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:33.967330   84547 cri.go:89] found id: ""
	I1210 00:01:33.967359   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.967367   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:33.967378   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:33.967390   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:34.051952   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:34.051996   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:34.087729   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:34.087762   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:34.137879   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:34.137915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:34.151989   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:34.152025   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:34.225587   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:33.698633   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.197023   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:33.257021   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:35.258050   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.489900   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:38.490643   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.726065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:36.740513   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:36.740594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:36.774959   84547 cri.go:89] found id: ""
	I1210 00:01:36.774991   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.775000   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:36.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:36.775070   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:36.808888   84547 cri.go:89] found id: ""
	I1210 00:01:36.808921   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.808934   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:36.808941   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:36.809001   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:36.841695   84547 cri.go:89] found id: ""
	I1210 00:01:36.841727   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.841738   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:36.841748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:36.841840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:36.877479   84547 cri.go:89] found id: ""
	I1210 00:01:36.877509   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.877522   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:36.877531   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:36.877600   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:36.909232   84547 cri.go:89] found id: ""
	I1210 00:01:36.909257   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.909265   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:36.909271   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:36.909328   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:36.945858   84547 cri.go:89] found id: ""
	I1210 00:01:36.945892   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.945904   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:36.945912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:36.945981   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:36.978559   84547 cri.go:89] found id: ""
	I1210 00:01:36.978592   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.978604   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:36.978611   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:36.978674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:37.015523   84547 cri.go:89] found id: ""
	I1210 00:01:37.015555   84547 logs.go:282] 0 containers: []
	W1210 00:01:37.015587   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:37.015598   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:37.015613   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:37.094825   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:37.094876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:37.134442   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:37.134470   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:37.184691   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:37.184728   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:37.198801   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:37.198828   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:37.268415   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:39.768790   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:39.783106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:39.783184   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:39.816629   84547 cri.go:89] found id: ""
	I1210 00:01:39.816655   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.816665   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:39.816672   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:39.816749   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:39.851518   84547 cri.go:89] found id: ""
	I1210 00:01:39.851550   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.851581   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:39.851590   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:39.851648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:39.887790   84547 cri.go:89] found id: ""
	I1210 00:01:39.887827   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.887842   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:39.887852   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:39.887924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:39.922230   84547 cri.go:89] found id: ""
	I1210 00:01:39.922255   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.922262   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:39.922268   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:39.922332   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:39.957142   84547 cri.go:89] found id: ""
	I1210 00:01:39.957174   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.957184   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:39.957192   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:39.957254   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:40.004583   84547 cri.go:89] found id: ""
	I1210 00:01:40.004609   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.004618   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:40.004624   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:40.004675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:40.038278   84547 cri.go:89] found id: ""
	I1210 00:01:40.038300   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.038308   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:40.038313   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:40.038366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:40.071920   84547 cri.go:89] found id: ""
	I1210 00:01:40.071947   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.071954   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:40.071963   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:40.071973   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:40.142005   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:40.142033   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:40.142049   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:40.219413   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:40.219452   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:40.260786   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:40.260822   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:40.315943   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:40.315988   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:38.198247   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:40.198455   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:37.756243   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:39.756567   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:41.757107   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:40.990316   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:42.990948   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:42.832592   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:42.847532   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:42.847630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:42.879313   84547 cri.go:89] found id: ""
	I1210 00:01:42.879341   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.879348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:42.879354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:42.879403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:42.914839   84547 cri.go:89] found id: ""
	I1210 00:01:42.914866   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.914877   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:42.914884   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:42.914947   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:42.949514   84547 cri.go:89] found id: ""
	I1210 00:01:42.949553   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.949564   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:42.949572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:42.949632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:42.987974   84547 cri.go:89] found id: ""
	I1210 00:01:42.988004   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.988021   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:42.988029   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:42.988087   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:43.022835   84547 cri.go:89] found id: ""
	I1210 00:01:43.022860   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.022867   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:43.022873   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:43.022921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:43.055940   84547 cri.go:89] found id: ""
	I1210 00:01:43.055967   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.055975   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:43.055981   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:43.056030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:43.088728   84547 cri.go:89] found id: ""
	I1210 00:01:43.088754   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.088762   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:43.088769   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:43.088827   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:43.122809   84547 cri.go:89] found id: ""
	I1210 00:01:43.122842   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.122853   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:43.122865   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:43.122881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:43.172243   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:43.172277   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:43.186566   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:43.186596   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:43.264301   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:43.264327   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:43.264341   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:43.339804   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:43.339848   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:42.698066   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.197813   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:44.256069   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:46.256746   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.490253   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:47.989687   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.881356   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:45.894702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:45.894779   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:45.928927   84547 cri.go:89] found id: ""
	I1210 00:01:45.928951   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.928958   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:45.928964   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:45.929009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:45.965271   84547 cri.go:89] found id: ""
	I1210 00:01:45.965303   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.965315   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:45.965323   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:45.965392   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:45.999059   84547 cri.go:89] found id: ""
	I1210 00:01:45.999082   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.999090   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:45.999095   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:45.999140   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:46.034425   84547 cri.go:89] found id: ""
	I1210 00:01:46.034456   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.034468   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:46.034476   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:46.034529   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:46.067946   84547 cri.go:89] found id: ""
	I1210 00:01:46.067970   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.067986   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:46.067993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:46.068056   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:46.101671   84547 cri.go:89] found id: ""
	I1210 00:01:46.101699   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.101710   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:46.101718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:46.101783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:46.135860   84547 cri.go:89] found id: ""
	I1210 00:01:46.135885   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.135893   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:46.135898   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:46.135948   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:46.177398   84547 cri.go:89] found id: ""
	I1210 00:01:46.177439   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.177450   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:46.177461   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:46.177476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:46.248134   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:46.248160   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:46.248175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:46.323652   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:46.323688   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:46.364017   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:46.364044   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:46.429480   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:46.429524   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:48.956518   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:48.970209   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:48.970294   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:49.008933   84547 cri.go:89] found id: ""
	I1210 00:01:49.008966   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.008977   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:49.008986   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:49.009050   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:49.044802   84547 cri.go:89] found id: ""
	I1210 00:01:49.044833   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.044843   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:49.044850   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:49.044921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:49.077484   84547 cri.go:89] found id: ""
	I1210 00:01:49.077517   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.077525   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:49.077530   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:49.077576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:49.110144   84547 cri.go:89] found id: ""
	I1210 00:01:49.110174   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.110186   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:49.110193   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:49.110255   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:49.142591   84547 cri.go:89] found id: ""
	I1210 00:01:49.142622   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.142633   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:49.142646   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:49.142709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:49.175456   84547 cri.go:89] found id: ""
	I1210 00:01:49.175486   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.175497   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:49.175505   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:49.175603   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:49.208341   84547 cri.go:89] found id: ""
	I1210 00:01:49.208368   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.208379   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:49.208387   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:49.208445   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:49.241486   84547 cri.go:89] found id: ""
	I1210 00:01:49.241509   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.241518   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:49.241615   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:49.241647   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:49.280023   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:49.280051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:49.328822   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:49.328858   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:49.343076   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:49.343104   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:49.418010   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:49.418037   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:49.418051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:47.198917   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:49.697840   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:48.757230   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:51.256927   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:49.990701   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:52.490649   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:52.004053   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:52.017350   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:52.017418   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:52.055617   84547 cri.go:89] found id: ""
	I1210 00:01:52.055641   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.055648   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:52.055654   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:52.055712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:52.088601   84547 cri.go:89] found id: ""
	I1210 00:01:52.088629   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.088637   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:52.088642   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:52.088694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:52.121880   84547 cri.go:89] found id: ""
	I1210 00:01:52.121912   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.121922   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:52.121928   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:52.121986   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:52.161281   84547 cri.go:89] found id: ""
	I1210 00:01:52.161321   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.161334   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:52.161341   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:52.161406   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:52.222768   84547 cri.go:89] found id: ""
	I1210 00:01:52.222793   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.222800   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:52.222806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:52.222862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:52.271441   84547 cri.go:89] found id: ""
	I1210 00:01:52.271465   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.271473   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:52.271479   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:52.271526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:52.311128   84547 cri.go:89] found id: ""
	I1210 00:01:52.311152   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.311160   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:52.311165   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:52.311211   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:52.348862   84547 cri.go:89] found id: ""
	I1210 00:01:52.348885   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.348892   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:52.348900   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:52.348913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:52.401280   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:52.401324   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:52.415532   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:52.415580   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:52.484956   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:52.484979   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:52.484994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:52.565102   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:52.565137   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:55.106446   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:55.121756   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:55.121846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:55.158601   84547 cri.go:89] found id: ""
	I1210 00:01:55.158632   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.158643   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:55.158650   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:55.158712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:55.192424   84547 cri.go:89] found id: ""
	I1210 00:01:55.192454   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.192464   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:55.192471   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:55.192530   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:55.226178   84547 cri.go:89] found id: ""
	I1210 00:01:55.226204   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.226213   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:55.226222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:55.226285   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:55.264123   84547 cri.go:89] found id: ""
	I1210 00:01:55.264148   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.264161   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:55.264169   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:55.264226   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:55.302476   84547 cri.go:89] found id: ""
	I1210 00:01:55.302503   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.302512   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:55.302520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:55.302597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:55.342308   84547 cri.go:89] found id: ""
	I1210 00:01:55.342341   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.342352   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:55.342360   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:55.342419   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:55.382203   84547 cri.go:89] found id: ""
	I1210 00:01:55.382226   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.382232   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:55.382238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:55.382286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:55.421381   84547 cri.go:89] found id: ""
	I1210 00:01:55.421409   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.421421   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:55.421432   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:55.421449   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:55.473758   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:55.473793   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:55.488138   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:55.488166   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:01:52.198005   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:54.697589   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:56.697748   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:53.756310   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:55.756964   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:54.990271   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:57.490303   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	W1210 00:01:55.567216   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:55.567240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:55.567251   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:55.648276   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:55.648319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:58.185245   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:58.199015   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:58.199091   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:58.232327   84547 cri.go:89] found id: ""
	I1210 00:01:58.232352   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.232360   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:58.232368   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:58.232436   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:58.270312   84547 cri.go:89] found id: ""
	I1210 00:01:58.270342   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.270353   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:58.270360   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:58.270420   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:58.303389   84547 cri.go:89] found id: ""
	I1210 00:01:58.303415   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.303422   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:58.303427   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:58.303486   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:58.338703   84547 cri.go:89] found id: ""
	I1210 00:01:58.338735   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.338747   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:58.338755   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:58.338817   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:58.376727   84547 cri.go:89] found id: ""
	I1210 00:01:58.376759   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.376770   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:58.376779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:58.376841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:58.410192   84547 cri.go:89] found id: ""
	I1210 00:01:58.410217   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.410224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:58.410230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:58.410286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:58.449772   84547 cri.go:89] found id: ""
	I1210 00:01:58.449794   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.449802   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:58.449807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:58.449859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:58.484285   84547 cri.go:89] found id: ""
	I1210 00:01:58.484316   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.484328   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:58.484339   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:58.484356   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:58.538402   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:58.538438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:58.551361   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:58.551391   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:58.613809   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:58.613836   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:58.613850   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:58.689606   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:58.689640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:58.698283   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.197599   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:57.758229   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:00.256339   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:59.490413   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.491035   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:03.990725   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.230924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:01.244878   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:01.244990   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:01.280475   84547 cri.go:89] found id: ""
	I1210 00:02:01.280501   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.280509   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:01.280514   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:01.280561   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:01.323042   84547 cri.go:89] found id: ""
	I1210 00:02:01.323065   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.323071   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:01.323077   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:01.323124   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:01.356146   84547 cri.go:89] found id: ""
	I1210 00:02:01.356171   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.356181   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:01.356190   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:01.356247   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:01.392542   84547 cri.go:89] found id: ""
	I1210 00:02:01.392567   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.392577   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:01.392584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:01.392721   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:01.426232   84547 cri.go:89] found id: ""
	I1210 00:02:01.426257   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.426268   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:01.426275   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:01.426341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:01.459754   84547 cri.go:89] found id: ""
	I1210 00:02:01.459786   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.459798   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:01.459806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:01.459865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:01.492406   84547 cri.go:89] found id: ""
	I1210 00:02:01.492435   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.492445   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:01.492450   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:01.492499   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:01.532012   84547 cri.go:89] found id: ""
	I1210 00:02:01.532034   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.532042   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:01.532049   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:01.532060   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:01.583145   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:01.583181   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:01.596910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:01.596939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:01.670480   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:01.670506   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:01.670534   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:01.748001   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:01.748041   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:04.291065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:04.304507   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:04.304587   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:04.341673   84547 cri.go:89] found id: ""
	I1210 00:02:04.341700   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.341713   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:04.341720   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:04.341772   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:04.379743   84547 cri.go:89] found id: ""
	I1210 00:02:04.379776   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.379787   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:04.379795   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:04.379856   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:04.413532   84547 cri.go:89] found id: ""
	I1210 00:02:04.413562   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.413573   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:04.413588   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:04.413648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:04.449192   84547 cri.go:89] found id: ""
	I1210 00:02:04.449221   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.449231   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:04.449238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:04.449324   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:04.484638   84547 cri.go:89] found id: ""
	I1210 00:02:04.484666   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.484677   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:04.484686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:04.484745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:04.524854   84547 cri.go:89] found id: ""
	I1210 00:02:04.524889   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.524903   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:04.524912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:04.524976   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:04.564697   84547 cri.go:89] found id: ""
	I1210 00:02:04.564726   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.564737   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:04.564748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:04.564797   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:04.603506   84547 cri.go:89] found id: ""
	I1210 00:02:04.603534   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.603544   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:04.603554   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:04.603583   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:04.653025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:04.653062   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:04.666833   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:04.666878   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:04.745491   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:04.745513   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:04.745526   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:04.825267   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:04.825304   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:03.702878   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:06.197834   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:02.755541   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:04.757334   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:07.256160   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:06.490491   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:08.491144   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:07.365419   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:07.378968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:07.379030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:07.412830   84547 cri.go:89] found id: ""
	I1210 00:02:07.412859   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.412868   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:07.412873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:07.412938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:07.445622   84547 cri.go:89] found id: ""
	I1210 00:02:07.445661   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.445674   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:07.445682   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:07.445742   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:07.480431   84547 cri.go:89] found id: ""
	I1210 00:02:07.480466   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.480474   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:07.480480   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:07.480533   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:07.518741   84547 cri.go:89] found id: ""
	I1210 00:02:07.518776   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.518790   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:07.518797   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:07.518860   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:07.552193   84547 cri.go:89] found id: ""
	I1210 00:02:07.552216   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.552223   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:07.552229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:07.552275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:07.585762   84547 cri.go:89] found id: ""
	I1210 00:02:07.585784   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.585792   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:07.585798   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:07.585843   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:07.618619   84547 cri.go:89] found id: ""
	I1210 00:02:07.618645   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.618653   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:07.618659   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:07.618709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:07.653377   84547 cri.go:89] found id: ""
	I1210 00:02:07.653418   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.653428   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:07.653440   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:07.653456   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:07.709366   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:07.709401   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:07.723762   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:07.723792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:07.804849   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:07.804869   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:07.804886   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:07.887117   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:07.887154   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:10.423120   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:10.436563   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:10.436628   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:10.470622   84547 cri.go:89] found id: ""
	I1210 00:02:10.470650   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.470658   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:10.470664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:10.470735   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:10.506211   84547 cri.go:89] found id: ""
	I1210 00:02:10.506238   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.506250   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:10.506257   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:10.506368   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:08.198217   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.699364   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:09.256492   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:11.256897   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.990662   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:13.491635   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.541846   84547 cri.go:89] found id: ""
	I1210 00:02:10.541871   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.541879   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:10.541885   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:10.541952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:10.581391   84547 cri.go:89] found id: ""
	I1210 00:02:10.581416   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.581427   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:10.581435   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:10.581503   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:10.615172   84547 cri.go:89] found id: ""
	I1210 00:02:10.615206   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.615216   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:10.615223   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:10.615289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:10.650791   84547 cri.go:89] found id: ""
	I1210 00:02:10.650813   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.650821   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:10.650826   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:10.650876   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:10.685428   84547 cri.go:89] found id: ""
	I1210 00:02:10.685452   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.685460   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:10.685466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:10.685524   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:10.719139   84547 cri.go:89] found id: ""
	I1210 00:02:10.719174   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.719186   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:10.719196   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:10.719211   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:10.732045   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:10.732073   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:10.805084   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:10.805111   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:10.805127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:10.888301   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:10.888337   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:10.926005   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:10.926033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:13.479317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:13.494021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:13.494089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:13.527753   84547 cri.go:89] found id: ""
	I1210 00:02:13.527787   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.527799   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:13.527806   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:13.527862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:13.560574   84547 cri.go:89] found id: ""
	I1210 00:02:13.560607   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.560618   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:13.560625   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:13.560688   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:13.595507   84547 cri.go:89] found id: ""
	I1210 00:02:13.595551   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.595584   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:13.595592   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:13.595657   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:13.629848   84547 cri.go:89] found id: ""
	I1210 00:02:13.629873   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.629884   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:13.629892   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:13.629952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:13.662407   84547 cri.go:89] found id: ""
	I1210 00:02:13.662436   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.662447   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:13.662454   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:13.662509   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:13.694892   84547 cri.go:89] found id: ""
	I1210 00:02:13.694921   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.694940   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:13.694949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:13.695013   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:13.733248   84547 cri.go:89] found id: ""
	I1210 00:02:13.733327   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.733349   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:13.733358   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:13.733426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:13.771853   84547 cri.go:89] found id: ""
	I1210 00:02:13.771884   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.771894   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:13.771906   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:13.771920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:13.846886   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:13.846913   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:13.846928   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:13.929722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:13.929758   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:13.968401   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:13.968427   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:14.019770   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:14.019811   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
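(Editor's note, illustrative only.) The cycle above repeats one container probe per control-plane component before falling back to log gathering. A minimal shell sketch of that probe, assuming direct access to the node; minikube itself drives it through its ssh_runner/cri packages rather than an interactive shell, but the underlying crictl invocation is the one shown verbatim in the log:

    # For each component the log checks, ask CRI-O for any container (running
    # or exited) with that name; an empty result is what logs.go reports as
    # "0 containers" / No container was found matching "<name>".
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      if [ -z "${ids}" ]; then
        echo "No container was found matching \"${name}\""
      else
        echo "found: ${ids}"
      fi
    done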
	I1210 00:02:13.197726   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.198437   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:13.257271   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.755750   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.990729   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:18.490851   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:16.532794   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:16.547084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:16.547172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:16.584046   84547 cri.go:89] found id: ""
	I1210 00:02:16.584073   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.584084   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:16.584091   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:16.584150   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:16.615981   84547 cri.go:89] found id: ""
	I1210 00:02:16.616012   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.616023   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:16.616030   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:16.616094   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:16.647939   84547 cri.go:89] found id: ""
	I1210 00:02:16.647967   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.647979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:16.647986   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:16.648048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:16.682585   84547 cri.go:89] found id: ""
	I1210 00:02:16.682620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.682632   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:16.682640   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:16.682695   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:16.718593   84547 cri.go:89] found id: ""
	I1210 00:02:16.718620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.718628   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:16.718634   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:16.718687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:16.752513   84547 cri.go:89] found id: ""
	I1210 00:02:16.752536   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.752543   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:16.752549   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:16.752598   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:16.787679   84547 cri.go:89] found id: ""
	I1210 00:02:16.787702   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.787710   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:16.787715   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:16.787777   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:16.821275   84547 cri.go:89] found id: ""
	I1210 00:02:16.821297   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.821305   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:16.821312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:16.821322   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:16.872500   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:16.872533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:16.885185   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:16.885212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:16.962658   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:16.962679   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:16.962694   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:17.039689   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:17.039726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:19.578060   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:19.590601   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:19.590675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:19.622145   84547 cri.go:89] found id: ""
	I1210 00:02:19.622170   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.622179   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:19.622184   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:19.622231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:19.653204   84547 cri.go:89] found id: ""
	I1210 00:02:19.653232   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.653243   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:19.653250   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:19.653317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:19.685104   84547 cri.go:89] found id: ""
	I1210 00:02:19.685137   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.685148   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:19.685156   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:19.685213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:19.719087   84547 cri.go:89] found id: ""
	I1210 00:02:19.719113   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.719121   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:19.719126   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:19.719176   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:19.753210   84547 cri.go:89] found id: ""
	I1210 00:02:19.753239   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.753250   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:19.753258   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:19.753317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:19.787608   84547 cri.go:89] found id: ""
	I1210 00:02:19.787635   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.787645   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:19.787653   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:19.787718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:19.823111   84547 cri.go:89] found id: ""
	I1210 00:02:19.823142   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.823154   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:19.823161   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:19.823221   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:19.858274   84547 cri.go:89] found id: ""
	I1210 00:02:19.858301   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.858312   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:19.858323   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:19.858344   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:19.905386   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:19.905420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:19.918995   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:19.919026   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:19.990676   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:19.990700   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:19.990716   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:20.064396   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:20.064435   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
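(Editor's note, illustrative only.) Each "Gathering logs for ..." pass runs the same five node-side commands that appear verbatim above; reproduced here as plain shell for manual debugging. The describe-nodes step is the one that keeps failing with "connection to the server localhost:8443 was refused", because no kube-apiserver container exists yet:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a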
	I1210 00:02:17.698109   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:20.197963   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:17.756633   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:19.757487   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:22.257036   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:20.990682   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:23.490383   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:22.604477   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:22.617408   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:22.617487   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:22.650144   84547 cri.go:89] found id: ""
	I1210 00:02:22.650178   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.650189   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:22.650197   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:22.650293   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:22.684342   84547 cri.go:89] found id: ""
	I1210 00:02:22.684367   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.684375   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:22.684380   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:22.684429   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:22.718168   84547 cri.go:89] found id: ""
	I1210 00:02:22.718194   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.718204   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:22.718211   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:22.718271   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:22.755192   84547 cri.go:89] found id: ""
	I1210 00:02:22.755222   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.755232   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:22.755240   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:22.755297   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:22.793095   84547 cri.go:89] found id: ""
	I1210 00:02:22.793129   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.793141   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:22.793149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:22.793209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:22.831116   84547 cri.go:89] found id: ""
	I1210 00:02:22.831146   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.831157   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:22.831164   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:22.831235   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:22.869652   84547 cri.go:89] found id: ""
	I1210 00:02:22.869686   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.869704   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:22.869709   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:22.869756   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:22.907480   84547 cri.go:89] found id: ""
	I1210 00:02:22.907504   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.907513   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:22.907520   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:22.907533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:22.983880   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:22.983902   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:22.983915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:23.062840   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:23.062880   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:23.101427   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:23.101476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:23.153861   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:23.153893   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:22.198638   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:24.698708   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:24.757526   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:27.256296   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:25.491835   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:27.990755   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
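(Editor's note, illustrative only.) The interleaved pod_ready lines come from three other clusters under test (pids 83859, 84259, 83900) polling their metrics-server pods. A rough manual equivalent of that readiness poll, assuming kubectl access to one of those contexts and the addon's usual k8s-app=metrics-server label; pod_ready.go itself uses minikube's Go client, not kubectl:

    # Hypothetical manual check: print each metrics-server pod's Ready condition,
    # which the log above keeps reporting as "Ready":"False".
    kubectl --context <cluster-context> -n kube-system get pod \
      -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'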
	I1210 00:02:25.666908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:25.680047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:25.680125   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:25.715821   84547 cri.go:89] found id: ""
	I1210 00:02:25.715853   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.715865   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:25.715873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:25.715931   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:25.748191   84547 cri.go:89] found id: ""
	I1210 00:02:25.748222   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.748232   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:25.748239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:25.748295   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:25.786474   84547 cri.go:89] found id: ""
	I1210 00:02:25.786498   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.786505   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:25.786511   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:25.786569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:25.819282   84547 cri.go:89] found id: ""
	I1210 00:02:25.819319   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.819330   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:25.819337   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:25.819400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:25.858064   84547 cri.go:89] found id: ""
	I1210 00:02:25.858091   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.858100   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:25.858106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:25.858169   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:25.893335   84547 cri.go:89] found id: ""
	I1210 00:02:25.893362   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.893373   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:25.893380   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:25.893439   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:25.927225   84547 cri.go:89] found id: ""
	I1210 00:02:25.927254   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.927265   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:25.927272   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:25.927341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:25.963678   84547 cri.go:89] found id: ""
	I1210 00:02:25.963715   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.963725   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:25.963738   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:25.963756   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:25.994462   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:25.994488   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:26.061394   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:26.061442   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:26.061458   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:26.135152   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:26.135187   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:26.171961   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:26.171994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:28.723326   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:28.735721   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:28.735787   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:28.769493   84547 cri.go:89] found id: ""
	I1210 00:02:28.769516   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.769525   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:28.769530   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:28.769582   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:28.805594   84547 cri.go:89] found id: ""
	I1210 00:02:28.805641   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.805652   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:28.805658   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:28.805704   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:28.838982   84547 cri.go:89] found id: ""
	I1210 00:02:28.839007   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.839015   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:28.839020   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:28.839072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:28.876607   84547 cri.go:89] found id: ""
	I1210 00:02:28.876627   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.876635   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:28.876641   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:28.876700   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:28.910352   84547 cri.go:89] found id: ""
	I1210 00:02:28.910379   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.910386   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:28.910392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:28.910464   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:28.943896   84547 cri.go:89] found id: ""
	I1210 00:02:28.943917   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.943924   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:28.943930   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:28.943978   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:28.976554   84547 cri.go:89] found id: ""
	I1210 00:02:28.976583   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.976590   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:28.976596   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:28.976644   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:29.011334   84547 cri.go:89] found id: ""
	I1210 00:02:29.011357   84547 logs.go:282] 0 containers: []
	W1210 00:02:29.011364   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:29.011372   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:29.011384   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:29.061418   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:29.061464   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:29.074887   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:29.074913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:29.147240   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:29.147261   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:29.147272   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:29.225058   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:29.225094   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:27.197730   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:29.197818   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.198788   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:29.256802   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.258194   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:30.490822   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:32.990271   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.763432   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:31.777257   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:31.777359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:31.812558   84547 cri.go:89] found id: ""
	I1210 00:02:31.812588   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.812598   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:31.812606   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:31.812668   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:31.847042   84547 cri.go:89] found id: ""
	I1210 00:02:31.847065   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.847082   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:31.847088   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:31.847135   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:31.881182   84547 cri.go:89] found id: ""
	I1210 00:02:31.881208   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.881216   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:31.881221   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:31.881272   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:31.917367   84547 cri.go:89] found id: ""
	I1210 00:02:31.917393   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.917401   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:31.917407   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:31.917454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:31.956836   84547 cri.go:89] found id: ""
	I1210 00:02:31.956868   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.956883   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:31.956893   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:31.956952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:31.993125   84547 cri.go:89] found id: ""
	I1210 00:02:31.993151   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.993160   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:31.993168   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:31.993225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:32.031648   84547 cri.go:89] found id: ""
	I1210 00:02:32.031679   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.031687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:32.031692   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:32.031746   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:32.065894   84547 cri.go:89] found id: ""
	I1210 00:02:32.065923   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.065930   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:32.065941   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:32.065957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:32.133473   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:32.133496   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:32.133508   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:32.213129   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:32.213161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:32.251424   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:32.251453   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:32.302284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:32.302323   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:34.815963   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:34.829460   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:34.829543   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:34.865811   84547 cri.go:89] found id: ""
	I1210 00:02:34.865837   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.865847   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:34.865854   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:34.865916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:34.899185   84547 cri.go:89] found id: ""
	I1210 00:02:34.899211   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.899220   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:34.899227   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:34.899289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:34.932472   84547 cri.go:89] found id: ""
	I1210 00:02:34.932500   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.932509   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:34.932517   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:34.932581   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:34.965817   84547 cri.go:89] found id: ""
	I1210 00:02:34.965846   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.965857   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:34.965866   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:34.965930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:35.000036   84547 cri.go:89] found id: ""
	I1210 00:02:35.000066   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.000077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:35.000084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:35.000139   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:35.033808   84547 cri.go:89] found id: ""
	I1210 00:02:35.033839   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.033850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:35.033857   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:35.033916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:35.066242   84547 cri.go:89] found id: ""
	I1210 00:02:35.066269   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.066278   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:35.066285   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:35.066349   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:35.100740   84547 cri.go:89] found id: ""
	I1210 00:02:35.100763   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.100771   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:35.100779   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:35.100792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:35.155483   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:35.155520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:35.168910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:35.168939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:35.243234   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:35.243252   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:35.243263   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:35.320622   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:35.320657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:33.698543   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:36.198250   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:33.756108   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:35.757682   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:34.990969   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:37.495166   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:37.855684   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:37.869056   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:37.869156   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:37.903720   84547 cri.go:89] found id: ""
	I1210 00:02:37.903748   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.903759   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:37.903766   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:37.903859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:37.937741   84547 cri.go:89] found id: ""
	I1210 00:02:37.937780   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.937791   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:37.937808   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:37.937869   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:37.975255   84547 cri.go:89] found id: ""
	I1210 00:02:37.975281   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.975292   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:37.975299   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:37.975359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:38.017348   84547 cri.go:89] found id: ""
	I1210 00:02:38.017381   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.017393   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:38.017400   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:38.017460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:38.051039   84547 cri.go:89] found id: ""
	I1210 00:02:38.051065   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.051073   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:38.051079   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:38.051129   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:38.085752   84547 cri.go:89] found id: ""
	I1210 00:02:38.085782   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.085791   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:38.085799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:38.085858   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:38.119975   84547 cri.go:89] found id: ""
	I1210 00:02:38.120004   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.120014   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:38.120021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:38.120086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:38.161467   84547 cri.go:89] found id: ""
	I1210 00:02:38.161499   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.161526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:38.161537   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:38.161551   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:38.222277   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:38.222314   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:38.239300   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:38.239332   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:38.308997   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:38.309016   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:38.309032   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:38.394064   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:38.394108   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:38.198484   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.697916   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:38.257723   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.756530   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:39.990445   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:41.993952   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.933406   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:40.945862   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:40.945937   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:40.979503   84547 cri.go:89] found id: ""
	I1210 00:02:40.979532   84547 logs.go:282] 0 containers: []
	W1210 00:02:40.979540   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:40.979545   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:40.979619   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:41.016758   84547 cri.go:89] found id: ""
	I1210 00:02:41.016792   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.016803   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:41.016811   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:41.016873   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:41.053562   84547 cri.go:89] found id: ""
	I1210 00:02:41.053593   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.053601   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:41.053607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:41.053667   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:41.086713   84547 cri.go:89] found id: ""
	I1210 00:02:41.086745   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.086757   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:41.086767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:41.086830   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:41.122901   84547 cri.go:89] found id: ""
	I1210 00:02:41.122935   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.122945   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:41.122952   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:41.123011   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:41.156306   84547 cri.go:89] found id: ""
	I1210 00:02:41.156337   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.156355   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:41.156362   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:41.156423   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:41.189840   84547 cri.go:89] found id: ""
	I1210 00:02:41.189871   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.189882   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:41.189890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:41.189946   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:41.223019   84547 cri.go:89] found id: ""
	I1210 00:02:41.223051   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.223061   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:41.223072   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:41.223088   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:41.275608   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:41.275640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:41.289181   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:41.289210   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:41.358375   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:41.358404   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:41.358420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:41.440214   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:41.440250   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:43.980600   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:43.993110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:43.993165   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:44.026688   84547 cri.go:89] found id: ""
	I1210 00:02:44.026721   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.026732   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:44.026741   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:44.026796   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:44.062914   84547 cri.go:89] found id: ""
	I1210 00:02:44.062936   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.062943   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:44.062948   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:44.062999   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:44.105974   84547 cri.go:89] found id: ""
	I1210 00:02:44.106001   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.106009   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:44.106014   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:44.106061   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:44.140239   84547 cri.go:89] found id: ""
	I1210 00:02:44.140265   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.140274   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:44.140280   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:44.140338   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:44.175754   84547 cri.go:89] found id: ""
	I1210 00:02:44.175785   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.175796   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:44.175803   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:44.175870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:44.211663   84547 cri.go:89] found id: ""
	I1210 00:02:44.211694   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.211705   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:44.211712   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:44.211776   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:44.244796   84547 cri.go:89] found id: ""
	I1210 00:02:44.244821   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.244831   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:44.244837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:44.244898   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:44.282491   84547 cri.go:89] found id: ""
	I1210 00:02:44.282515   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.282528   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:44.282549   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:44.282562   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:44.335284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:44.335328   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:44.349489   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:44.349530   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:44.418643   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:44.418668   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:44.418682   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:44.493901   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:44.493932   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:43.197494   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:45.198341   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:43.256947   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:45.257225   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:47.258073   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:44.491413   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:46.990521   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:48.991847   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:47.033132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:47.046322   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:47.046403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:47.078490   84547 cri.go:89] found id: ""
	I1210 00:02:47.078521   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.078533   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:47.078541   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:47.078602   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:47.111457   84547 cri.go:89] found id: ""
	I1210 00:02:47.111479   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.111487   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:47.111492   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:47.111538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:47.144643   84547 cri.go:89] found id: ""
	I1210 00:02:47.144678   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.144689   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:47.144696   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:47.144757   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:47.178106   84547 cri.go:89] found id: ""
	I1210 00:02:47.178131   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.178141   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:47.178148   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:47.178213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:47.215670   84547 cri.go:89] found id: ""
	I1210 00:02:47.215697   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.215712   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:47.215718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:47.215767   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:47.248916   84547 cri.go:89] found id: ""
	I1210 00:02:47.248941   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.248948   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:47.248953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:47.249002   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:47.287632   84547 cri.go:89] found id: ""
	I1210 00:02:47.287660   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.287671   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:47.287680   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:47.287745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:47.327064   84547 cri.go:89] found id: ""
	I1210 00:02:47.327094   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.327103   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:47.327112   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:47.327126   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:47.341132   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:47.341176   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:47.417100   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:47.417121   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:47.417134   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:47.502612   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:47.502648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:47.541312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:47.541339   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.095403   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:50.108145   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:50.108202   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:50.140424   84547 cri.go:89] found id: ""
	I1210 00:02:50.140451   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.140462   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:50.140472   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:50.140532   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:50.173836   84547 cri.go:89] found id: ""
	I1210 00:02:50.173859   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.173866   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:50.173872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:50.173928   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:50.213916   84547 cri.go:89] found id: ""
	I1210 00:02:50.213937   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.213944   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:50.213949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:50.213997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:50.246854   84547 cri.go:89] found id: ""
	I1210 00:02:50.246889   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.246899   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:50.246907   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:50.246956   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:50.281416   84547 cri.go:89] found id: ""
	I1210 00:02:50.281448   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.281456   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:50.281462   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:50.281511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:50.313263   84547 cri.go:89] found id: ""
	I1210 00:02:50.313296   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.313308   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:50.313318   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:50.313385   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:50.347421   84547 cri.go:89] found id: ""
	I1210 00:02:50.347453   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.347463   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:50.347470   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:50.347544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:50.383111   84547 cri.go:89] found id: ""
	I1210 00:02:50.383134   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.383142   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:50.383151   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:50.383162   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:50.421982   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:50.422013   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.475478   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:50.475520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:50.489202   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:50.489256   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:02:47.199043   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:49.698055   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.698813   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:49.756585   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.757407   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.489831   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:53.490572   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	W1210 00:02:50.559501   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:50.559539   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:50.559552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.136042   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:53.149149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:53.149227   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:53.189323   84547 cri.go:89] found id: ""
	I1210 00:02:53.189349   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.189357   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:53.189365   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:53.189425   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:53.225235   84547 cri.go:89] found id: ""
	I1210 00:02:53.225269   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.225281   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:53.225288   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:53.225347   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:53.263465   84547 cri.go:89] found id: ""
	I1210 00:02:53.263492   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.263502   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:53.263510   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:53.263597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:53.297544   84547 cri.go:89] found id: ""
	I1210 00:02:53.297571   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.297583   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:53.297591   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:53.297656   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:53.331731   84547 cri.go:89] found id: ""
	I1210 00:02:53.331755   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.331762   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:53.331767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:53.331815   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:53.367395   84547 cri.go:89] found id: ""
	I1210 00:02:53.367427   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.367440   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:53.367447   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:53.367511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:53.403297   84547 cri.go:89] found id: ""
	I1210 00:02:53.403324   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.403332   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:53.403338   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:53.403398   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:53.437126   84547 cri.go:89] found id: ""
	I1210 00:02:53.437150   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.437158   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:53.437166   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:53.437177   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:53.489875   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:53.489914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:53.503915   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:53.503940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:53.578086   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:53.578114   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:53.578127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.658463   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:53.658501   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:53.699332   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.197941   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:54.257053   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.258352   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:55.990548   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:58.489947   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.197093   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:56.211959   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:56.212020   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:56.250462   84547 cri.go:89] found id: ""
	I1210 00:02:56.250484   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.250493   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:56.250498   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:56.250552   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:56.286828   84547 cri.go:89] found id: ""
	I1210 00:02:56.286854   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.286865   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:56.286872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:56.286939   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:56.320750   84547 cri.go:89] found id: ""
	I1210 00:02:56.320779   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.320787   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:56.320793   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:56.320840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:56.355903   84547 cri.go:89] found id: ""
	I1210 00:02:56.355943   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.355954   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:56.355960   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:56.356026   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:56.395979   84547 cri.go:89] found id: ""
	I1210 00:02:56.396007   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.396018   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:56.396025   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:56.396081   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:56.431481   84547 cri.go:89] found id: ""
	I1210 00:02:56.431506   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.431514   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:56.431520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:56.431594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:56.470318   84547 cri.go:89] found id: ""
	I1210 00:02:56.470349   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.470356   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:56.470361   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:56.470410   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:56.506388   84547 cri.go:89] found id: ""
	I1210 00:02:56.506411   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.506418   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:56.506429   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:56.506441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:56.558438   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:56.558483   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:56.571981   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:56.572004   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:56.634665   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:56.634697   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:56.634713   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:56.715663   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:56.715697   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:59.255305   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:59.268846   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:59.268930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:59.302334   84547 cri.go:89] found id: ""
	I1210 00:02:59.302359   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.302366   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:59.302372   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:59.302426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:59.336380   84547 cri.go:89] found id: ""
	I1210 00:02:59.336409   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.336419   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:59.336425   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:59.336492   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:59.370179   84547 cri.go:89] found id: ""
	I1210 00:02:59.370201   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.370210   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:59.370214   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:59.370268   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:59.403199   84547 cri.go:89] found id: ""
	I1210 00:02:59.403222   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.403229   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:59.403236   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:59.403307   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:59.438650   84547 cri.go:89] found id: ""
	I1210 00:02:59.438673   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.438681   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:59.438686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:59.438736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:59.473166   84547 cri.go:89] found id: ""
	I1210 00:02:59.473191   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.473199   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:59.473205   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:59.473264   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:59.506857   84547 cri.go:89] found id: ""
	I1210 00:02:59.506879   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.506888   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:59.506902   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:59.506963   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:59.551488   84547 cri.go:89] found id: ""
	I1210 00:02:59.551515   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.551526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:59.551542   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:59.551557   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:59.605032   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:59.605069   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:59.619238   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:59.619271   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:59.690772   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:59.690798   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:59.690813   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:59.774424   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:59.774460   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:58.198128   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:00.698106   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:58.756596   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:01.257970   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:00.490268   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:02.990951   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:02.315240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:02.329636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:02.329728   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:02.368571   84547 cri.go:89] found id: ""
	I1210 00:03:02.368599   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.368609   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:02.368621   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:02.368687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:02.406107   84547 cri.go:89] found id: ""
	I1210 00:03:02.406136   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.406148   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:02.406155   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:02.406219   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:02.443050   84547 cri.go:89] found id: ""
	I1210 00:03:02.443077   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.443091   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:02.443098   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:02.443146   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:02.484425   84547 cri.go:89] found id: ""
	I1210 00:03:02.484451   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.484461   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:02.484469   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:02.484536   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:02.525624   84547 cri.go:89] found id: ""
	I1210 00:03:02.525647   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.525655   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:02.525661   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:02.525711   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:02.564808   84547 cri.go:89] found id: ""
	I1210 00:03:02.564839   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.564850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:02.564856   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:02.564907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:02.598322   84547 cri.go:89] found id: ""
	I1210 00:03:02.598346   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.598354   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:02.598359   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:02.598417   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:02.632256   84547 cri.go:89] found id: ""
	I1210 00:03:02.632310   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.632322   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:02.632334   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:02.632348   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:02.686025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:02.686063   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:02.701214   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:02.701237   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:02.773453   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:02.773477   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:02.773491   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:02.862017   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:02.862068   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:05.401014   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:05.415009   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:05.415072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:05.454916   84547 cri.go:89] found id: ""
	I1210 00:03:05.454945   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.454955   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:05.454962   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:05.455023   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:05.492955   84547 cri.go:89] found id: ""
	I1210 00:03:05.492981   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.492988   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:05.492995   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:05.493046   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:02.698652   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.196840   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:03.755858   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.757395   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.489875   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:07.990053   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.539646   84547 cri.go:89] found id: ""
	I1210 00:03:05.539670   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.539683   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:05.539690   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:05.539755   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:05.574534   84547 cri.go:89] found id: ""
	I1210 00:03:05.574559   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.574567   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:05.574572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:05.574632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:05.608141   84547 cri.go:89] found id: ""
	I1210 00:03:05.608166   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.608174   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:05.608180   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:05.608243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:05.643725   84547 cri.go:89] found id: ""
	I1210 00:03:05.643751   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.643759   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:05.643765   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:05.643812   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:05.677730   84547 cri.go:89] found id: ""
	I1210 00:03:05.677760   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.677772   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:05.677779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:05.677846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:05.712475   84547 cri.go:89] found id: ""
	I1210 00:03:05.712507   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.712519   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:05.712531   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:05.712548   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:05.764532   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:05.764569   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:05.777910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:05.777938   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:05.851070   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:05.851099   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:05.851114   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:05.929518   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:05.929554   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.465166   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:08.478666   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:08.478723   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:08.513763   84547 cri.go:89] found id: ""
	I1210 00:03:08.513795   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.513808   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:08.513816   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:08.513877   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:08.548272   84547 cri.go:89] found id: ""
	I1210 00:03:08.548299   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.548306   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:08.548313   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:08.548397   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:08.581773   84547 cri.go:89] found id: ""
	I1210 00:03:08.581801   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.581812   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:08.581820   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:08.581884   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:08.613625   84547 cri.go:89] found id: ""
	I1210 00:03:08.613655   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.613664   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:08.613669   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:08.613716   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:08.644618   84547 cri.go:89] found id: ""
	I1210 00:03:08.644644   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.644652   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:08.644658   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:08.644717   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:08.676840   84547 cri.go:89] found id: ""
	I1210 00:03:08.676867   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.676879   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:08.676887   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:08.676952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:08.709918   84547 cri.go:89] found id: ""
	I1210 00:03:08.709952   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.709961   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:08.709969   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:08.710029   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:08.740842   84547 cri.go:89] found id: ""
	I1210 00:03:08.740871   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.740882   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:08.740893   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:08.740907   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:08.808679   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:08.808705   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:08.808721   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:08.884692   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:08.884733   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.920922   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:08.920952   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:08.971404   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:08.971441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:07.197548   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:09.197749   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:11.198894   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:08.258477   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:10.755482   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:10.490495   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:12.990709   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:11.484905   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:11.498791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:11.498865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:11.535784   84547 cri.go:89] found id: ""
	I1210 00:03:11.535809   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.535820   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:11.535827   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:11.535890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:11.567981   84547 cri.go:89] found id: ""
	I1210 00:03:11.568006   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.568019   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:11.568026   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:11.568083   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:11.601188   84547 cri.go:89] found id: ""
	I1210 00:03:11.601213   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.601224   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:11.601231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:11.601289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:11.635759   84547 cri.go:89] found id: ""
	I1210 00:03:11.635787   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.635798   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:11.635807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:11.635867   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:11.675512   84547 cri.go:89] found id: ""
	I1210 00:03:11.675537   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.675547   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:11.675554   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:11.675638   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:11.709889   84547 cri.go:89] found id: ""
	I1210 00:03:11.709912   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.709922   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:11.709929   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:11.709987   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:11.744571   84547 cri.go:89] found id: ""
	I1210 00:03:11.744603   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.744610   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:11.744616   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:11.744677   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:11.779091   84547 cri.go:89] found id: ""
	I1210 00:03:11.779123   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.779136   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:11.779146   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:11.779156   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:11.830682   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:11.830726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:11.844352   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:11.844383   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:11.915025   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:11.915046   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:11.915058   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:11.998581   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:11.998620   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:14.536408   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:14.550250   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:14.550321   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:14.586005   84547 cri.go:89] found id: ""
	I1210 00:03:14.586037   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.586049   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:14.586057   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:14.586118   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:14.620124   84547 cri.go:89] found id: ""
	I1210 00:03:14.620159   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.620170   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:14.620178   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:14.620231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:14.655079   84547 cri.go:89] found id: ""
	I1210 00:03:14.655104   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.655112   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:14.655117   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:14.655178   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:14.688253   84547 cri.go:89] found id: ""
	I1210 00:03:14.688285   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.688298   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:14.688308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:14.688371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:14.726904   84547 cri.go:89] found id: ""
	I1210 00:03:14.726932   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.726940   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:14.726945   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:14.726994   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:14.764836   84547 cri.go:89] found id: ""
	I1210 00:03:14.764868   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.764881   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:14.764890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:14.764955   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:14.803557   84547 cri.go:89] found id: ""
	I1210 00:03:14.803605   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.803616   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:14.803621   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:14.803674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:14.841070   84547 cri.go:89] found id: ""
	I1210 00:03:14.841102   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.841122   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:14.841137   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:14.841161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:14.907607   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:14.907631   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:14.907644   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:14.985179   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:14.985209   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:15.022654   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:15.022687   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:15.075224   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:15.075260   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:13.697072   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:15.697892   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:12.757232   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:14.758529   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.256541   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:15.490871   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.990140   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.589836   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:17.604045   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:17.604102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:17.643211   84547 cri.go:89] found id: ""
	I1210 00:03:17.643241   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.643251   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:17.643260   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:17.643320   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:17.675436   84547 cri.go:89] found id: ""
	I1210 00:03:17.675462   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.675472   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:17.675479   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:17.675538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:17.709428   84547 cri.go:89] found id: ""
	I1210 00:03:17.709465   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.709476   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:17.709484   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:17.709544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:17.764088   84547 cri.go:89] found id: ""
	I1210 00:03:17.764121   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.764132   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:17.764139   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:17.764200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:17.797908   84547 cri.go:89] found id: ""
	I1210 00:03:17.797933   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.797944   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:17.797953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:17.798016   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:17.829580   84547 cri.go:89] found id: ""
	I1210 00:03:17.829609   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.829620   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:17.829628   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:17.829687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:17.865106   84547 cri.go:89] found id: ""
	I1210 00:03:17.865136   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.865145   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:17.865150   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:17.865200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:17.896757   84547 cri.go:89] found id: ""
	I1210 00:03:17.896791   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.896803   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:17.896814   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:17.896830   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:17.977210   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:17.977244   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:18.016843   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:18.016867   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:18.065263   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:18.065294   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:18.078188   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:18.078212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:18.148164   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:17.698675   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:20.199732   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:19.755949   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:21.756959   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:19.990714   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:21.990766   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:23.991443   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:23.991468   83900 pod_ready.go:82] duration metric: took 4m0.007840011s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	E1210 00:03:23.991477   83900 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 00:03:23.991484   83900 pod_ready.go:39] duration metric: took 4m6.196076299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:03:23.991501   83900 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:03:23.991534   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:23.991610   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:24.032198   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:24.032227   83900 cri.go:89] found id: ""
	I1210 00:03:24.032237   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:24.032303   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.037671   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:24.037746   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:24.076467   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:24.076499   83900 cri.go:89] found id: ""
	I1210 00:03:24.076507   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:24.076557   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.081125   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:24.081193   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:24.125433   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:24.125472   83900 cri.go:89] found id: ""
	I1210 00:03:24.125483   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:24.125542   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.131023   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:24.131097   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:24.173215   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:24.173239   83900 cri.go:89] found id: ""
	I1210 00:03:24.173247   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:24.173303   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.179027   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:24.179108   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:24.228905   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:24.228936   83900 cri.go:89] found id: ""
	I1210 00:03:24.228946   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:24.229004   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.234441   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:24.234520   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:20.648921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:20.661458   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:20.661516   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:20.693473   84547 cri.go:89] found id: ""
	I1210 00:03:20.693509   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.693518   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:20.693524   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:20.693576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:20.726279   84547 cri.go:89] found id: ""
	I1210 00:03:20.726301   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.726309   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:20.726314   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:20.726375   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:20.759895   84547 cri.go:89] found id: ""
	I1210 00:03:20.759922   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.759931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:20.759936   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:20.759988   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:20.793934   84547 cri.go:89] found id: ""
	I1210 00:03:20.793964   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.793974   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:20.793982   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:20.794049   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:20.828040   84547 cri.go:89] found id: ""
	I1210 00:03:20.828066   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.828077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:20.828084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:20.828143   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:20.861917   84547 cri.go:89] found id: ""
	I1210 00:03:20.861950   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.861960   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:20.861967   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:20.862028   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:20.896458   84547 cri.go:89] found id: ""
	I1210 00:03:20.896481   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.896489   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:20.896494   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:20.896551   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:20.931014   84547 cri.go:89] found id: ""
	I1210 00:03:20.931044   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.931052   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:20.931061   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:20.931072   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:20.968693   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:20.968718   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:21.021880   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:21.021917   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:21.035848   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:21.035881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:21.104570   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:21.104604   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:21.104621   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:23.679447   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:23.692326   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:23.692426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:23.729773   84547 cri.go:89] found id: ""
	I1210 00:03:23.729796   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.729804   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:23.729809   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:23.729855   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:23.765875   84547 cri.go:89] found id: ""
	I1210 00:03:23.765905   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.765915   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:23.765922   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:23.765984   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:23.800785   84547 cri.go:89] found id: ""
	I1210 00:03:23.800821   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.800831   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:23.800838   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:23.800902   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:23.838137   84547 cri.go:89] found id: ""
	I1210 00:03:23.838160   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.838168   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:23.838173   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:23.838222   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:23.869916   84547 cri.go:89] found id: ""
	I1210 00:03:23.869947   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.869958   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:23.869966   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:23.870027   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:23.903939   84547 cri.go:89] found id: ""
	I1210 00:03:23.903962   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.903969   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:23.903975   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:23.904021   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:23.937092   84547 cri.go:89] found id: ""
	I1210 00:03:23.937119   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.937127   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:23.937133   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:23.937194   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:23.970562   84547 cri.go:89] found id: ""
	I1210 00:03:23.970599   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.970611   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:23.970622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:23.970641   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:23.983364   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:23.983394   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:24.066624   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:24.066645   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:24.066657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:24.151466   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:24.151502   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:24.204590   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:24.204615   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:22.698354   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.698462   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.256922   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:26.755965   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.279840   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:24.279859   83900 cri.go:89] found id: ""
	I1210 00:03:24.279866   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:24.279922   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.283898   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:24.283972   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:24.319541   83900 cri.go:89] found id: ""
	I1210 00:03:24.319598   83900 logs.go:282] 0 containers: []
	W1210 00:03:24.319612   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:24.319624   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:24.319686   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:24.356205   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:24.356228   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:24.356234   83900 cri.go:89] found id: ""
	I1210 00:03:24.356242   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:24.356302   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.361772   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.366656   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:24.366690   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:24.408922   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:24.408955   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:24.466720   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:24.466762   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:24.500254   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:24.500290   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:24.535025   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:24.535058   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:24.575706   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:24.575732   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:24.645227   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:24.645265   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:24.658833   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:24.658878   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:24.702076   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:24.702107   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:24.738869   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:24.738900   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:25.204715   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:25.204753   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:25.320267   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:25.320315   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:25.370599   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:25.370635   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:27.910863   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:27.925070   83900 api_server.go:72] duration metric: took 4m17.856306845s to wait for apiserver process to appear ...
	I1210 00:03:27.925098   83900 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:03:27.925164   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:27.925227   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:27.959991   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:27.960017   83900 cri.go:89] found id: ""
	I1210 00:03:27.960026   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:27.960081   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:27.964069   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:27.964128   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:27.997618   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:27.997640   83900 cri.go:89] found id: ""
	I1210 00:03:27.997650   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:27.997710   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.001497   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:28.001570   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:28.034858   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:28.034883   83900 cri.go:89] found id: ""
	I1210 00:03:28.034891   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:28.034953   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.038775   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:28.038852   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:28.074473   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:28.074497   83900 cri.go:89] found id: ""
	I1210 00:03:28.074506   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:28.074568   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.079082   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:28.079149   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:28.113948   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:28.113971   83900 cri.go:89] found id: ""
	I1210 00:03:28.113981   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:28.114045   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.118930   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:28.118982   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:28.162794   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:28.162820   83900 cri.go:89] found id: ""
	I1210 00:03:28.162827   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:28.162872   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.166637   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:28.166715   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:28.206591   83900 cri.go:89] found id: ""
	I1210 00:03:28.206619   83900 logs.go:282] 0 containers: []
	W1210 00:03:28.206627   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:28.206633   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:28.206693   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:28.251722   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:28.251748   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:28.251754   83900 cri.go:89] found id: ""
	I1210 00:03:28.251763   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:28.251821   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.256785   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.260355   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:28.260378   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:28.295801   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:28.295827   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:28.734920   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:28.734963   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:28.804266   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:28.804305   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:28.818223   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:28.818251   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:28.931008   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:28.931044   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:28.977544   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:28.977592   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:29.016645   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:29.016679   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:29.052920   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:29.052945   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:29.094464   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:29.094497   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:29.132520   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:29.132545   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:29.180364   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:29.180396   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:29.216627   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:29.216657   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:26.774169   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:26.786888   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:26.786964   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:26.820391   84547 cri.go:89] found id: ""
	I1210 00:03:26.820423   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.820433   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:26.820441   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:26.820504   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:26.853927   84547 cri.go:89] found id: ""
	I1210 00:03:26.853955   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.853963   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:26.853971   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:26.854031   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:26.887309   84547 cri.go:89] found id: ""
	I1210 00:03:26.887337   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.887347   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:26.887353   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:26.887415   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:26.923869   84547 cri.go:89] found id: ""
	I1210 00:03:26.923897   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.923908   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:26.923915   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:26.923983   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:26.959919   84547 cri.go:89] found id: ""
	I1210 00:03:26.959952   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.959964   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:26.959971   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:26.960032   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:26.993742   84547 cri.go:89] found id: ""
	I1210 00:03:26.993775   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.993787   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:26.993794   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:26.993853   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:27.026977   84547 cri.go:89] found id: ""
	I1210 00:03:27.027011   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.027021   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:27.027028   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:27.027090   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:27.064135   84547 cri.go:89] found id: ""
	I1210 00:03:27.064164   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.064173   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:27.064181   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:27.064192   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:27.101758   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:27.101784   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:27.155536   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:27.155592   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:27.168891   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:27.168914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:27.242486   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:27.242511   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:27.242525   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:29.821740   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:29.834536   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:29.834604   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:29.868316   84547 cri.go:89] found id: ""
	I1210 00:03:29.868341   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.868348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:29.868354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:29.868402   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:29.902290   84547 cri.go:89] found id: ""
	I1210 00:03:29.902320   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.902331   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:29.902338   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:29.902399   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:29.948750   84547 cri.go:89] found id: ""
	I1210 00:03:29.948780   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.948792   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:29.948800   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:29.948864   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:29.994587   84547 cri.go:89] found id: ""
	I1210 00:03:29.994618   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.994629   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:29.994636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:29.994694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:30.042216   84547 cri.go:89] found id: ""
	I1210 00:03:30.042243   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.042256   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:30.042264   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:30.042345   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:30.075964   84547 cri.go:89] found id: ""
	I1210 00:03:30.075989   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.075999   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:30.076007   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:30.076072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:30.108647   84547 cri.go:89] found id: ""
	I1210 00:03:30.108676   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.108687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:30.108695   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:30.108760   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:30.140983   84547 cri.go:89] found id: ""
	I1210 00:03:30.141013   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.141022   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:30.141030   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:30.141040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:30.198281   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:30.198319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:30.213820   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:30.213849   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:30.283907   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:30.283927   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:30.283940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:30.360731   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:30.360768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:27.197512   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:29.698238   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.698679   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:28.757444   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.255481   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.759061   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1210 00:03:31.763064   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 200:
	ok
	I1210 00:03:31.763974   83900 api_server.go:141] control plane version: v1.31.2
	I1210 00:03:31.763992   83900 api_server.go:131] duration metric: took 3.83888731s to wait for apiserver health ...
	I1210 00:03:31.763999   83900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:03:31.764021   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:31.764077   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:31.798282   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:31.798312   83900 cri.go:89] found id: ""
	I1210 00:03:31.798320   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:31.798375   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.802661   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:31.802726   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:31.837861   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:31.837888   83900 cri.go:89] found id: ""
	I1210 00:03:31.837896   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:31.837943   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.842139   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:31.842224   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:31.876940   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:31.876967   83900 cri.go:89] found id: ""
	I1210 00:03:31.876977   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:31.877043   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.881416   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:31.881491   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:31.915449   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:31.915481   83900 cri.go:89] found id: ""
	I1210 00:03:31.915491   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:31.915575   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.919342   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:31.919400   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:31.954554   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:31.954583   83900 cri.go:89] found id: ""
	I1210 00:03:31.954592   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:31.954641   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.958484   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:31.958569   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:32.000361   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:32.000390   83900 cri.go:89] found id: ""
	I1210 00:03:32.000401   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:32.000462   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.004339   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:32.004400   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:32.043231   83900 cri.go:89] found id: ""
	I1210 00:03:32.043254   83900 logs.go:282] 0 containers: []
	W1210 00:03:32.043279   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:32.043285   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:32.043334   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:32.079572   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:32.079597   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:32.079608   83900 cri.go:89] found id: ""
	I1210 00:03:32.079615   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:32.079661   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.083526   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.087101   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:32.087126   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:32.494977   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:32.495021   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:32.509299   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:32.509342   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:32.612029   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:32.612066   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:32.656868   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:32.656900   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:32.695347   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:32.695376   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:32.732071   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:32.732100   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:32.767692   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:32.767718   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:32.836047   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:32.836088   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:32.887136   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:32.887175   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:32.929836   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:32.929873   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:32.972459   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:32.972492   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:33.029387   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:33.029415   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
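The "Gathering logs" pass above reduces to a handful of node-side commands; a condensed sketch using only the invocations that appear in the log (the container ID is a placeholder for one of the IDs listed above):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a
    sudo crictl logs --tail 400 <container-id>    # e.g. the kube-apiserver ID found above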
	I1210 00:03:32.904302   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:32.919123   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:32.919209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:32.952847   84547 cri.go:89] found id: ""
	I1210 00:03:32.952879   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.952889   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:32.952897   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:32.952961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:32.986993   84547 cri.go:89] found id: ""
	I1210 00:03:32.987020   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.987029   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:32.987035   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:32.987085   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:33.027506   84547 cri.go:89] found id: ""
	I1210 00:03:33.027536   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.027548   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:33.027556   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:33.027630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:33.061553   84547 cri.go:89] found id: ""
	I1210 00:03:33.061588   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.061605   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:33.061613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:33.061673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:33.097640   84547 cri.go:89] found id: ""
	I1210 00:03:33.097679   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.097693   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:33.097702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:33.097783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:33.131725   84547 cri.go:89] found id: ""
	I1210 00:03:33.131758   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.131768   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:33.131775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:33.131839   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:33.165731   84547 cri.go:89] found id: ""
	I1210 00:03:33.165759   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.165771   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:33.165779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:33.165841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:33.200288   84547 cri.go:89] found id: ""
	I1210 00:03:33.200310   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.200320   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:33.200338   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:33.200351   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:33.251524   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:33.251577   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:33.266197   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:33.266223   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:33.343529   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:33.343583   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:33.343600   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:33.420133   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:33.420175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:35.577482   83900 system_pods.go:59] 8 kube-system pods found
	I1210 00:03:35.577511   83900 system_pods.go:61] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running
	I1210 00:03:35.577516   83900 system_pods.go:61] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running
	I1210 00:03:35.577520   83900 system_pods.go:61] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running
	I1210 00:03:35.577524   83900 system_pods.go:61] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running
	I1210 00:03:35.577527   83900 system_pods.go:61] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running
	I1210 00:03:35.577529   83900 system_pods.go:61] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running
	I1210 00:03:35.577537   83900 system_pods.go:61] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:03:35.577542   83900 system_pods.go:61] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running
	I1210 00:03:35.577549   83900 system_pods.go:74] duration metric: took 3.81354476s to wait for pod list to return data ...
	I1210 00:03:35.577556   83900 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:03:35.579951   83900 default_sa.go:45] found service account: "default"
	I1210 00:03:35.579971   83900 default_sa.go:55] duration metric: took 2.410307ms for default service account to be created ...
	I1210 00:03:35.579977   83900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:03:35.584102   83900 system_pods.go:86] 8 kube-system pods found
	I1210 00:03:35.584126   83900 system_pods.go:89] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running
	I1210 00:03:35.584131   83900 system_pods.go:89] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running
	I1210 00:03:35.584136   83900 system_pods.go:89] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running
	I1210 00:03:35.584142   83900 system_pods.go:89] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running
	I1210 00:03:35.584148   83900 system_pods.go:89] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running
	I1210 00:03:35.584153   83900 system_pods.go:89] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running
	I1210 00:03:35.584163   83900 system_pods.go:89] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:03:35.584168   83900 system_pods.go:89] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running
	I1210 00:03:35.584181   83900 system_pods.go:126] duration metric: took 4.196356ms to wait for k8s-apps to be running ...
	I1210 00:03:35.584192   83900 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:03:35.584235   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:03:35.598192   83900 system_svc.go:56] duration metric: took 13.993491ms WaitForService to wait for kubelet
	I1210 00:03:35.598218   83900 kubeadm.go:582] duration metric: took 4m25.529459505s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:03:35.598238   83900 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:03:35.600992   83900 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:03:35.601012   83900 node_conditions.go:123] node cpu capacity is 2
	I1210 00:03:35.601025   83900 node_conditions.go:105] duration metric: took 2.782415ms to run NodePressure ...
	I1210 00:03:35.601038   83900 start.go:241] waiting for startup goroutines ...
	I1210 00:03:35.601049   83900 start.go:246] waiting for cluster config update ...
	I1210 00:03:35.601063   83900 start.go:255] writing updated cluster config ...
	I1210 00:03:35.601360   83900 ssh_runner.go:195] Run: rm -f paused
	I1210 00:03:35.647487   83900 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:03:35.650508   83900 out.go:177] * Done! kubectl is now configured to use "embed-certs-825613" cluster and "default" namespace by default
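With the embed-certs-825613 profile reported as ready, the checks the tool just performed (node readiness, kube-system pods, the default service account) can be repeated by hand; a minimal sketch, assuming the kubeconfig context name shown in the log:

    kubectl --context embed-certs-825613 get nodes
    kubectl --context embed-certs-825613 -n kube-system get pods
    kubectl --context embed-certs-825613 -n default get serviceaccount default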
	I1210 00:03:34.199682   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:36.696731   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:33.258255   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:35.756802   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:35.971411   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:35.988993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:35.989059   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:36.022639   84547 cri.go:89] found id: ""
	I1210 00:03:36.022673   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.022684   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:36.022692   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:36.022758   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:36.055421   84547 cri.go:89] found id: ""
	I1210 00:03:36.055452   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.055461   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:36.055466   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:36.055514   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:36.087690   84547 cri.go:89] found id: ""
	I1210 00:03:36.087721   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.087731   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:36.087738   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:36.087802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:36.125209   84547 cri.go:89] found id: ""
	I1210 00:03:36.125240   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.125249   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:36.125254   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:36.125304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:36.157544   84547 cri.go:89] found id: ""
	I1210 00:03:36.157586   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.157599   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:36.157607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:36.157676   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:36.193415   84547 cri.go:89] found id: ""
	I1210 00:03:36.193448   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.193459   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:36.193466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:36.193525   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:36.228941   84547 cri.go:89] found id: ""
	I1210 00:03:36.228971   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.228982   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:36.228989   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:36.229052   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:36.264821   84547 cri.go:89] found id: ""
	I1210 00:03:36.264851   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.264862   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:36.264873   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:36.264887   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:36.314841   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:36.314881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:36.328664   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:36.328695   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:36.402929   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:36.402956   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:36.402970   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:36.480270   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:36.480306   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:39.022748   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:39.036323   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:39.036382   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:39.070582   84547 cri.go:89] found id: ""
	I1210 00:03:39.070608   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.070616   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:39.070622   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:39.070690   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:39.108783   84547 cri.go:89] found id: ""
	I1210 00:03:39.108814   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.108825   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:39.108832   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:39.108889   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:39.142300   84547 cri.go:89] found id: ""
	I1210 00:03:39.142330   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.142338   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:39.142344   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:39.142400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:39.177859   84547 cri.go:89] found id: ""
	I1210 00:03:39.177891   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.177903   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:39.177911   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:39.177965   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:39.212109   84547 cri.go:89] found id: ""
	I1210 00:03:39.212140   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.212152   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:39.212160   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:39.212209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:39.245451   84547 cri.go:89] found id: ""
	I1210 00:03:39.245490   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.245501   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:39.245509   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:39.245569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:39.278925   84547 cri.go:89] found id: ""
	I1210 00:03:39.278957   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.278967   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:39.278974   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:39.279063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:39.312322   84547 cri.go:89] found id: ""
	I1210 00:03:39.312350   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.312358   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:39.312367   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:39.312377   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:39.362844   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:39.362882   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:39.375938   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:39.375967   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:39.443649   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:39.443676   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:39.443691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:39.523722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:39.523757   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:38.698105   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:41.198814   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:38.256567   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:40.257434   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:42.062317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:42.075094   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:42.075157   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:42.106719   84547 cri.go:89] found id: ""
	I1210 00:03:42.106744   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.106751   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:42.106757   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:42.106805   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:42.141977   84547 cri.go:89] found id: ""
	I1210 00:03:42.142011   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.142022   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:42.142029   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:42.142089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:42.175117   84547 cri.go:89] found id: ""
	I1210 00:03:42.175151   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.175164   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:42.175172   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:42.175243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:42.209871   84547 cri.go:89] found id: ""
	I1210 00:03:42.209900   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.209911   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:42.209919   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:42.209982   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:42.252784   84547 cri.go:89] found id: ""
	I1210 00:03:42.252812   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.252822   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:42.252830   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:42.252891   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:42.285934   84547 cri.go:89] found id: ""
	I1210 00:03:42.285961   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.285973   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:42.285980   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:42.286040   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:42.327393   84547 cri.go:89] found id: ""
	I1210 00:03:42.327423   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.327433   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:42.327440   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:42.327498   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:42.362244   84547 cri.go:89] found id: ""
	I1210 00:03:42.362279   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.362289   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:42.362302   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:42.362317   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:42.441656   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:42.441691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:42.479357   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:42.479397   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:42.533002   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:42.533038   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:42.546485   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:42.546514   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:42.616912   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:45.117156   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:45.131265   84547 kubeadm.go:597] duration metric: took 4m2.157458218s to restartPrimaryControlPlane
	W1210 00:03:45.131350   84547 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:03:45.131379   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:03:43.699026   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:46.197420   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:42.756686   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:44.251077   84259 pod_ready.go:82] duration metric: took 4m0.000996899s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" ...
	E1210 00:03:44.251112   84259 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 00:03:44.251136   84259 pod_ready.go:39] duration metric: took 4m13.15350289s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:03:44.251170   84259 kubeadm.go:597] duration metric: took 4m23.227405415s to restartPrimaryControlPlane
	W1210 00:03:44.251238   84259 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:03:44.251368   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:03:46.778025   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.646620044s)
	I1210 00:03:46.778099   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:03:46.792384   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:03:46.802897   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:03:46.813393   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:03:46.813417   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:03:46.813473   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:03:46.823016   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:03:46.823093   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:03:46.832912   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:03:46.841961   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:03:46.842019   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:03:46.851160   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.859879   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:03:46.859938   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.869192   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:03:46.878423   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:03:46.878487   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:03:46.888103   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:03:47.105146   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
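The stale-config check above follows one pattern per file: grep for the control-plane URL and, if that fails, remove the file before re-running kubeadm init. A condensed sketch of the same sequence, using the file names and the port-8443 URL from this profile's log (the default-k8s-diff-port profile below repeats it with port 8444):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done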
	I1210 00:03:48.698211   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:51.197463   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:53.198578   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:55.697251   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:57.698289   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:00.198291   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:02.696926   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:04.697431   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:06.698260   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:10.270952   84259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.019545181s)
	I1210 00:04:10.271025   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:10.292112   84259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:04:10.307538   84259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:04:10.325375   84259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:04:10.325406   84259 kubeadm.go:157] found existing configuration files:
	
	I1210 00:04:10.325465   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 00:04:10.338892   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:04:10.338960   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:04:10.353888   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 00:04:10.364787   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:04:10.364852   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:04:10.374513   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 00:04:10.393430   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:04:10.393486   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:04:10.403621   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 00:04:10.412759   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:04:10.412824   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:04:10.422153   84259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:04:10.468789   84259 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:04:10.468852   84259 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:04:10.566712   84259 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:04:10.566840   84259 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:04:10.566939   84259 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:04:10.574253   84259 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:04:10.576376   84259 out.go:235]   - Generating certificates and keys ...
	I1210 00:04:10.576485   84259 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:04:10.576566   84259 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:04:10.576722   84259 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:04:10.576816   84259 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:04:10.576915   84259 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:04:10.576991   84259 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:04:10.577107   84259 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:04:10.577214   84259 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:04:10.577319   84259 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:04:10.577433   84259 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:04:10.577522   84259 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:04:10.577616   84259 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:04:10.870887   84259 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:04:10.977055   84259 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:04:11.160320   84259 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:04:11.207864   84259 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:04:11.315037   84259 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:04:11.315715   84259 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:04:11.319135   84259 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:04:09.197254   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:11.197831   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:11.321063   84259 out.go:235]   - Booting up control plane ...
	I1210 00:04:11.321193   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:04:11.321296   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:04:11.322134   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:04:11.341567   84259 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:04:11.347658   84259 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:04:11.347736   84259 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:04:11.492127   84259 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:04:11.492293   84259 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:04:11.994410   84259 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.01471ms
	I1210 00:04:11.994533   84259 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
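kubeadm's kubelet-check and api-check poll the health endpoints named above; a hedged sketch of probing them manually from the node (port 10248 comes from the kubeadm output, 8444 is this profile's API server port; the API server probe may require credentials if anonymous access to /healthz is disabled):

    curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
    curl -skf https://127.0.0.1:8444/healthz && echo "apiserver healthy"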
	I1210 00:04:13.697731   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:16.198771   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:16.998638   84259 kubeadm.go:310] [api-check] The API server is healthy after 5.002934845s
	I1210 00:04:17.009930   84259 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:04:17.025483   84259 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:04:17.059445   84259 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:04:17.059762   84259 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-871210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:04:17.071665   84259 kubeadm.go:310] [bootstrap-token] Using token: 1x1r34.gs3p33sqgju9dylj
	I1210 00:04:17.073936   84259 out.go:235]   - Configuring RBAC rules ...
	I1210 00:04:17.074055   84259 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:04:17.079920   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:04:17.090408   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:04:17.094072   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:04:17.096951   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:04:17.099929   84259 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:04:17.404431   84259 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:04:17.837125   84259 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:04:18.404721   84259 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:04:18.405661   84259 kubeadm.go:310] 
	I1210 00:04:18.405757   84259 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:04:18.405770   84259 kubeadm.go:310] 
	I1210 00:04:18.405871   84259 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:04:18.405883   84259 kubeadm.go:310] 
	I1210 00:04:18.405916   84259 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:04:18.406012   84259 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:04:18.406101   84259 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:04:18.406111   84259 kubeadm.go:310] 
	I1210 00:04:18.406197   84259 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:04:18.406209   84259 kubeadm.go:310] 
	I1210 00:04:18.406318   84259 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:04:18.406329   84259 kubeadm.go:310] 
	I1210 00:04:18.406412   84259 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:04:18.406482   84259 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:04:18.406544   84259 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:04:18.406550   84259 kubeadm.go:310] 
	I1210 00:04:18.406643   84259 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:04:18.406787   84259 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:04:18.406802   84259 kubeadm.go:310] 
	I1210 00:04:18.406919   84259 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 1x1r34.gs3p33sqgju9dylj \
	I1210 00:04:18.407089   84259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1210 00:04:18.407117   84259 kubeadm.go:310] 	--control-plane 
	I1210 00:04:18.407123   84259 kubeadm.go:310] 
	I1210 00:04:18.407207   84259 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:04:18.407214   84259 kubeadm.go:310] 
	I1210 00:04:18.407300   84259 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 1x1r34.gs3p33sqgju9dylj \
	I1210 00:04:18.407433   84259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1210 00:04:18.408197   84259 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:04:18.408272   84259 cni.go:84] Creating CNI manager for ""
	I1210 00:04:18.408289   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:04:18.410841   84259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:04:18.697285   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:20.698266   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:18.412209   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:04:18.422123   84259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
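The 496-byte conflist copied above is not reproduced in the log. For orientation only, a bridge CNI conflist generally has the shape below; the subnet, plugin options, and exact contents are illustrative assumptions, not the file minikube actually wrote to /etc/cni/net.d/1-k8s.conflist:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }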
	I1210 00:04:18.440972   84259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:04:18.441058   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:18.441138   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-871210 minikube.k8s.io/updated_at=2024_12_10T00_04_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=default-k8s-diff-port-871210 minikube.k8s.io/primary=true
	I1210 00:04:18.462185   84259 ops.go:34] apiserver oom_adj: -16
	I1210 00:04:18.625855   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:19.126760   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:19.626829   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:20.126663   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:20.626893   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:21.126851   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:21.625892   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.126904   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.626450   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.715909   84259 kubeadm.go:1113] duration metric: took 4.274924102s to wait for elevateKubeSystemPrivileges
	I1210 00:04:22.715958   84259 kubeadm.go:394] duration metric: took 5m1.740971462s to StartCluster
	I1210 00:04:22.715982   84259 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:04:22.716080   84259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:04:22.718538   84259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:04:22.718890   84259 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:04:22.718958   84259 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
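The addon map in the line above marks default-storageclass, metrics-server, and storage-provisioner as enabled for this profile; the same state can be inspected or set after startup with the minikube CLI, for example:

    minikube -p default-k8s-diff-port-871210 addons list
    minikube -p default-k8s-diff-port-871210 addons enable metrics-server
    minikube -p default-k8s-diff-port-871210 addons enable storage-provisioner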
	I1210 00:04:22.719059   84259 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719079   84259 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.719091   84259 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:04:22.719100   84259 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719123   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.719120   84259 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719169   84259 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.719184   84259 addons.go:243] addon metrics-server should already be in state true
	I1210 00:04:22.719216   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.719133   84259 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:04:22.719134   84259 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-871210"
	I1210 00:04:22.719582   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719631   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.719717   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719728   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719753   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.719763   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.720519   84259 out.go:177] * Verifying Kubernetes components...
	I1210 00:04:22.722140   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:04:22.735592   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
	I1210 00:04:22.735716   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I1210 00:04:22.736075   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.736100   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.736605   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.736626   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.736605   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.736642   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.736995   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.737007   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.737217   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.737595   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.737639   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.739278   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41255
	I1210 00:04:22.741315   84259 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.741337   84259 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:04:22.741367   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.741739   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.741778   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.742259   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.742829   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.742858   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.743182   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.743632   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.743663   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.754879   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1210 00:04:22.755310   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.756015   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.756032   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.756397   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.756576   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.758535   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.759514   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33065
	I1210 00:04:22.759891   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.759925   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39821
	I1210 00:04:22.760309   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.760330   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.760346   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.760435   84259 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:04:22.760689   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.760831   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.760845   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.760860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.761166   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.761589   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.761627   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.761737   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:04:22.761754   84259 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:04:22.761774   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.762882   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.764440   84259 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:04:22.764810   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.765316   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.765339   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.765474   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.765675   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.765828   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.765894   84259 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:04:22.765912   84259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:04:22.765919   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.765934   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.768979   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.769446   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.769498   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.769626   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.769832   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.769956   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.770078   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.780995   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45841
	I1210 00:04:22.781451   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.782077   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.782098   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.782467   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.782705   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.784771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.785015   84259 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:04:22.785030   84259 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:04:22.785047   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.788208   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.788659   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.788690   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.788771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.789148   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.789275   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.789386   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.926076   84259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:04:22.945059   84259 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-871210" to be "Ready" ...
	I1210 00:04:22.970684   84259 node_ready.go:49] node "default-k8s-diff-port-871210" has status "Ready":"True"
	I1210 00:04:22.970727   84259 node_ready.go:38] duration metric: took 25.618738ms for node "default-k8s-diff-port-871210" to be "Ready" ...
	I1210 00:04:22.970740   84259 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:22.989661   84259 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:23.045411   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:04:23.045433   84259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:04:23.074907   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:04:23.076857   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:04:23.094809   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:04:23.094836   84259 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:04:23.125107   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:04:23.125136   84259 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:04:23.174286   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:04:23.554521   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554556   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554568   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554589   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554855   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.554871   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.554880   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554882   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.554888   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554893   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.554891   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.554903   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.555087   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.555099   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.555283   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.555354   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.555379   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.579741   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.579775   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.580075   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.580091   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.580113   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.776674   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.776701   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.777020   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.777060   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.777068   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.777076   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.777089   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.777314   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.777330   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.777347   84259 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-871210"
	I1210 00:04:23.778992   84259 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 00:04:23.198263   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:25.198299   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:23.780354   84259 addons.go:510] duration metric: took 1.061403814s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 00:04:25.012490   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:27.495987   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:27.697345   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:29.698123   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:31.692183   83859 pod_ready.go:82] duration metric: took 4m0.000797786s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" ...
	E1210 00:04:31.692211   83859 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 00:04:31.692228   83859 pod_ready.go:39] duration metric: took 4m11.541153015s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:31.692258   83859 kubeadm.go:597] duration metric: took 4m19.334550967s to restartPrimaryControlPlane
	W1210 00:04:31.692305   83859 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:04:31.692329   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:04:29.995452   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:31.997927   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:34.496689   84259 pod_ready.go:93] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.496716   84259 pod_ready.go:82] duration metric: took 11.507027662s for pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.496730   84259 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.501166   84259 pod_ready.go:93] pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.501188   84259 pod_ready.go:82] duration metric: took 4.45016ms for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.501198   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.506061   84259 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.506085   84259 pod_ready.go:82] duration metric: took 4.881919ms for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.506096   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.510401   84259 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.510422   84259 pod_ready.go:82] duration metric: took 4.320761ms for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.510432   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pj85d" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.514319   84259 pod_ready.go:93] pod "kube-proxy-pj85d" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.514340   84259 pod_ready.go:82] duration metric: took 3.902541ms for pod "kube-proxy-pj85d" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.514348   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.896369   84259 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.896393   84259 pod_ready.go:82] duration metric: took 382.038726ms for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.896401   84259 pod_ready.go:39] duration metric: took 11.925650242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:34.896415   84259 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:04:34.896466   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:04:34.910589   84259 api_server.go:72] duration metric: took 12.191654524s to wait for apiserver process to appear ...
	I1210 00:04:34.910617   84259 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:04:34.910639   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1210 00:04:34.916503   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I1210 00:04:34.917955   84259 api_server.go:141] control plane version: v1.31.2
	I1210 00:04:34.917979   84259 api_server.go:131] duration metric: took 7.355946ms to wait for apiserver health ...
	I1210 00:04:34.917987   84259 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:04:35.098220   84259 system_pods.go:59] 9 kube-system pods found
	I1210 00:04:35.098252   84259 system_pods.go:61] "coredns-7c65d6cfc9-7xpcc" [6cff7e56-1785-41c8-bd9c-db9d3f0bd05f] Running
	I1210 00:04:35.098259   84259 system_pods.go:61] "coredns-7c65d6cfc9-z2n25" [b6b81952-7281-4705-9536-06eb939a5807] Running
	I1210 00:04:35.098265   84259 system_pods.go:61] "etcd-default-k8s-diff-port-871210" [e9c747a1-72c0-4b30-b401-c39a41fe5eb5] Running
	I1210 00:04:35.098271   84259 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-871210" [1a27b619-b576-4a36-b88a-0c498ad38628] Running
	I1210 00:04:35.098276   84259 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-871210" [d22a69b7-3389-46a9-9ca5-a3101aa34e05] Running
	I1210 00:04:35.098280   84259 system_pods.go:61] "kube-proxy-pj85d" [d1b9b056-f4a3-419c-86fa-a94d88464f74] Running
	I1210 00:04:35.098285   84259 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-871210" [50a03a5c-d0e5-454a-9a7b-49963ff59d06] Running
	I1210 00:04:35.098294   84259 system_pods.go:61] "metrics-server-6867b74b74-7g2qm" [49ac129a-c85d-4af1-b3b2-06bc10bced77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:04:35.098298   84259 system_pods.go:61] "storage-provisioner" [ea716edd-4030-4ec3-b094-c3a50154b473] Running
	I1210 00:04:35.098309   84259 system_pods.go:74] duration metric: took 180.315426ms to wait for pod list to return data ...
	I1210 00:04:35.098322   84259 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:04:35.294342   84259 default_sa.go:45] found service account: "default"
	I1210 00:04:35.294367   84259 default_sa.go:55] duration metric: took 196.039183ms for default service account to be created ...
	I1210 00:04:35.294376   84259 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:04:35.497331   84259 system_pods.go:86] 9 kube-system pods found
	I1210 00:04:35.497364   84259 system_pods.go:89] "coredns-7c65d6cfc9-7xpcc" [6cff7e56-1785-41c8-bd9c-db9d3f0bd05f] Running
	I1210 00:04:35.497369   84259 system_pods.go:89] "coredns-7c65d6cfc9-z2n25" [b6b81952-7281-4705-9536-06eb939a5807] Running
	I1210 00:04:35.497373   84259 system_pods.go:89] "etcd-default-k8s-diff-port-871210" [e9c747a1-72c0-4b30-b401-c39a41fe5eb5] Running
	I1210 00:04:35.497377   84259 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-871210" [1a27b619-b576-4a36-b88a-0c498ad38628] Running
	I1210 00:04:35.497381   84259 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-871210" [d22a69b7-3389-46a9-9ca5-a3101aa34e05] Running
	I1210 00:04:35.497384   84259 system_pods.go:89] "kube-proxy-pj85d" [d1b9b056-f4a3-419c-86fa-a94d88464f74] Running
	I1210 00:04:35.497387   84259 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-871210" [50a03a5c-d0e5-454a-9a7b-49963ff59d06] Running
	I1210 00:04:35.497396   84259 system_pods.go:89] "metrics-server-6867b74b74-7g2qm" [49ac129a-c85d-4af1-b3b2-06bc10bced77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:04:35.497400   84259 system_pods.go:89] "storage-provisioner" [ea716edd-4030-4ec3-b094-c3a50154b473] Running
	I1210 00:04:35.497409   84259 system_pods.go:126] duration metric: took 203.02694ms to wait for k8s-apps to be running ...
	I1210 00:04:35.497416   84259 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:04:35.497456   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:35.512217   84259 system_svc.go:56] duration metric: took 14.792056ms WaitForService to wait for kubelet
	I1210 00:04:35.512248   84259 kubeadm.go:582] duration metric: took 12.793318604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:04:35.512274   84259 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:04:35.695292   84259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:04:35.695318   84259 node_conditions.go:123] node cpu capacity is 2
	I1210 00:04:35.695329   84259 node_conditions.go:105] duration metric: took 183.048181ms to run NodePressure ...
	I1210 00:04:35.695341   84259 start.go:241] waiting for startup goroutines ...
	I1210 00:04:35.695348   84259 start.go:246] waiting for cluster config update ...
	I1210 00:04:35.695361   84259 start.go:255] writing updated cluster config ...
	I1210 00:04:35.695666   84259 ssh_runner.go:195] Run: rm -f paused
	I1210 00:04:35.742539   84259 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:04:35.744394   84259 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-871210" cluster and "default" namespace by default
	I1210 00:04:57.851348   83859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.159001958s)
	I1210 00:04:57.851413   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:57.866601   83859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:04:57.876643   83859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:04:57.886172   83859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:04:57.886200   83859 kubeadm.go:157] found existing configuration files:
	
	I1210 00:04:57.886252   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:04:57.895643   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:04:57.895722   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:04:57.905397   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:04:57.914236   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:04:57.914299   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:04:57.923225   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:04:57.932422   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:04:57.932476   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:04:57.942840   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:04:57.952087   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:04:57.952159   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:04:57.961371   83859 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:04:58.005314   83859 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:04:58.005444   83859 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:04:58.098287   83859 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:04:58.098431   83859 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:04:58.098591   83859 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:04:58.106525   83859 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:04:58.109142   83859 out.go:235]   - Generating certificates and keys ...
	I1210 00:04:58.109219   83859 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:04:58.109320   83859 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:04:58.109456   83859 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:04:58.109536   83859 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:04:58.109637   83859 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:04:58.109728   83859 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:04:58.109840   83859 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:04:58.109940   83859 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:04:58.110047   83859 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:04:58.110152   83859 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:04:58.110225   83859 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:04:58.110296   83859 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:04:58.357649   83859 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:04:58.505840   83859 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:04:58.890560   83859 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:04:58.965928   83859 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:04:59.341665   83859 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:04:59.342240   83859 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:04:59.344644   83859 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:04:59.346225   83859 out.go:235]   - Booting up control plane ...
	I1210 00:04:59.346353   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:04:59.348071   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:04:59.348893   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:04:59.366824   83859 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:04:59.372906   83859 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:04:59.372962   83859 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:04:59.509554   83859 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:04:59.509695   83859 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:05:00.014472   83859 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.085309ms
	I1210 00:05:00.014639   83859 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:05:05.017162   83859 kubeadm.go:310] [api-check] The API server is healthy after 5.002677832s
	I1210 00:05:05.029078   83859 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:05:05.048524   83859 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:05:05.080458   83859 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:05:05.080735   83859 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-048296 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:05:05.091793   83859 kubeadm.go:310] [bootstrap-token] Using token: y5jzuu.af7syybsclcivlzq
	I1210 00:05:05.093253   83859 out.go:235]   - Configuring RBAC rules ...
	I1210 00:05:05.093401   83859 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:05:05.098842   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:05:05.106341   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:05:05.109935   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:05:05.114403   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:05:05.121436   83859 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:05:05.424690   83859 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:05:05.851096   83859 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:05:06.425724   83859 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:05:06.426713   83859 kubeadm.go:310] 
	I1210 00:05:06.426785   83859 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:05:06.426808   83859 kubeadm.go:310] 
	I1210 00:05:06.426904   83859 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:05:06.426936   83859 kubeadm.go:310] 
	I1210 00:05:06.426981   83859 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:05:06.427061   83859 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:05:06.427110   83859 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:05:06.427120   83859 kubeadm.go:310] 
	I1210 00:05:06.427199   83859 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:05:06.427210   83859 kubeadm.go:310] 
	I1210 00:05:06.427282   83859 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:05:06.427294   83859 kubeadm.go:310] 
	I1210 00:05:06.427381   83859 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:05:06.427486   83859 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:05:06.427623   83859 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:05:06.427635   83859 kubeadm.go:310] 
	I1210 00:05:06.427757   83859 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:05:06.427874   83859 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:05:06.427905   83859 kubeadm.go:310] 
	I1210 00:05:06.428032   83859 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y5jzuu.af7syybsclcivlzq \
	I1210 00:05:06.428167   83859 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1210 00:05:06.428201   83859 kubeadm.go:310] 	--control-plane 
	I1210 00:05:06.428211   83859 kubeadm.go:310] 
	I1210 00:05:06.428322   83859 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:05:06.428332   83859 kubeadm.go:310] 
	I1210 00:05:06.428438   83859 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y5jzuu.af7syybsclcivlzq \
	I1210 00:05:06.428572   83859 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1210 00:05:06.428746   83859 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:05:06.428832   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:05:06.428849   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:05:06.431674   83859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:05:06.433006   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:05:06.444058   83859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 00:05:06.462707   83859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:05:06.462838   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:06.462873   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-048296 minikube.k8s.io/updated_at=2024_12_10T00_05_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=no-preload-048296 minikube.k8s.io/primary=true
	I1210 00:05:06.493379   83859 ops.go:34] apiserver oom_adj: -16
	I1210 00:05:06.666080   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:07.166762   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:07.666408   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:08.166269   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:08.666734   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:09.166797   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:09.666522   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.166230   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.667061   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.774149   83859 kubeadm.go:1113] duration metric: took 4.311371383s to wait for elevateKubeSystemPrivileges
	I1210 00:05:10.774188   83859 kubeadm.go:394] duration metric: took 4m58.464009996s to StartCluster
	I1210 00:05:10.774211   83859 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:10.774290   83859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:05:10.777711   83859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:10.778040   83859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:05:10.778133   83859 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:05:10.778230   83859 addons.go:69] Setting storage-provisioner=true in profile "no-preload-048296"
	I1210 00:05:10.778247   83859 addons.go:234] Setting addon storage-provisioner=true in "no-preload-048296"
	W1210 00:05:10.778255   83859 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:05:10.778249   83859 addons.go:69] Setting default-storageclass=true in profile "no-preload-048296"
	I1210 00:05:10.778261   83859 addons.go:69] Setting metrics-server=true in profile "no-preload-048296"
	I1210 00:05:10.778276   83859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-048296"
	I1210 00:05:10.778286   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.778299   83859 addons.go:234] Setting addon metrics-server=true in "no-preload-048296"
	I1210 00:05:10.778339   83859 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1210 00:05:10.778394   83859 addons.go:243] addon metrics-server should already be in state true
	I1210 00:05:10.778446   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.778684   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778716   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778746   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.778721   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.778860   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778897   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.779748   83859 out.go:177] * Verifying Kubernetes components...
	I1210 00:05:10.781467   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:05:10.795108   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34465
	I1210 00:05:10.795227   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I1210 00:05:10.795615   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.795709   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.796135   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.796160   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.796221   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.796240   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.796522   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.796539   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.797128   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.797162   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.797200   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.797167   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.797697   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I1210 00:05:10.798091   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.798587   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.798613   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.798982   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.799324   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.803238   83859 addons.go:234] Setting addon default-storageclass=true in "no-preload-048296"
	W1210 00:05:10.803262   83859 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:05:10.803291   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.803683   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.803722   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.814000   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I1210 00:05:10.814046   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I1210 00:05:10.814470   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.814686   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.814992   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.815014   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.815427   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.815448   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.815515   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.815748   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.815974   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.816171   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.817989   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.818439   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.820342   83859 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:05:10.820360   83859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:05:10.821535   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:05:10.821556   83859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:05:10.821577   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.821652   83859 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:05:10.821673   83859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:05:10.821690   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.825763   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826205   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.826228   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42881
	I1210 00:05:10.826252   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826275   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.826294   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826327   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.826228   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826504   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.826563   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.826633   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.826711   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.826860   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.826864   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.827029   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:10.827152   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.827173   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.827241   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:10.827691   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.829421   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.829471   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.867902   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I1210 00:05:10.868566   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.869138   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.869169   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.869575   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.869782   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.871531   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.871792   83859 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:05:10.871810   83859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:05:10.871832   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.874309   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.874822   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.874845   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.874999   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.875202   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.875354   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.875501   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:11.009426   83859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:05:11.030237   83859 node_ready.go:35] waiting up to 6m0s for node "no-preload-048296" to be "Ready" ...
	I1210 00:05:11.054344   83859 node_ready.go:49] node "no-preload-048296" has status "Ready":"True"
	I1210 00:05:11.054366   83859 node_ready.go:38] duration metric: took 24.096361ms for node "no-preload-048296" to be "Ready" ...
	I1210 00:05:11.054376   83859 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:05:11.071740   83859 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:11.123723   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:05:11.137744   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:05:11.137772   83859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:05:11.159680   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:05:11.179245   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:05:11.179267   83859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:05:11.240553   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:05:11.240580   83859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:05:11.311813   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:05:12.041427   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041463   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041509   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041532   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041886   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.041949   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.041954   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.041963   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041967   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.041972   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041932   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.041986   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.042087   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.042229   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.042245   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.042297   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.042319   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.092616   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.092639   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.092910   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.092923   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.535943   83859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.224081671s)
	I1210 00:05:12.536008   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.536020   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.536456   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.536459   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.536482   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.536491   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.536498   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.536728   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.536735   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.536750   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.536761   83859 addons.go:475] Verifying addon metrics-server=true in "no-preload-048296"
	I1210 00:05:12.538638   83859 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 00:05:12.539910   83859 addons.go:510] duration metric: took 1.761785575s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
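	[editor's note] The addon-enable flow above copies each manifest to /etc/kubernetes/addons on the node and then applies it with the cluster's own kubectl binary, pointing KUBECONFIG at the node-local kubeconfig. A minimal Go sketch of that apply step follows; it runs the command locally for illustration only, whereas the harness executes the same command on the VM over SSH, and the file path shown is just one of the manifests from the log.

	// addonapply: illustrative sketch of the "sudo KUBECONFIG=... kubectl apply -f ..." step above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// sudo accepts VAR=value arguments, which is how the log sets KUBECONFIG for kubectl.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.31.2/kubectl",
			"apply", "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("apply failed:", err)
		}
	}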
	I1210 00:05:13.078954   83859 pod_ready.go:103] pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:05:13.579737   83859 pod_ready.go:93] pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:13.579760   83859 pod_ready.go:82] duration metric: took 2.507994954s for pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:13.579770   83859 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:14.586682   83859 pod_ready.go:93] pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:14.586704   83859 pod_ready.go:82] duration metric: took 1.006927767s for pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:14.586714   83859 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.092318   83859 pod_ready.go:93] pod "etcd-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.092345   83859 pod_ready.go:82] duration metric: took 505.624811ms for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.092356   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.096802   83859 pod_ready.go:93] pod "kube-apiserver-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.096828   83859 pod_ready.go:82] duration metric: took 4.463914ms for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.096839   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.100904   83859 pod_ready.go:93] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.100923   83859 pod_ready.go:82] duration metric: took 4.07832ms for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.100932   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qklxb" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.176527   83859 pod_ready.go:93] pod "kube-proxy-qklxb" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.176552   83859 pod_ready.go:82] duration metric: took 75.613294ms for pod "kube-proxy-qklxb" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.176562   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:17.182952   83859 pod_ready.go:103] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:05:18.181993   83859 pod_ready.go:93] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:18.182016   83859 pod_ready.go:82] duration metric: took 3.005447779s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:18.182024   83859 pod_ready.go:39] duration metric: took 7.127639413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:05:18.182038   83859 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:05:18.182084   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:05:18.196127   83859 api_server.go:72] duration metric: took 7.418043985s to wait for apiserver process to appear ...
	I1210 00:05:18.196155   83859 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:05:18.196176   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:05:18.200537   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 200:
	ok
	I1210 00:05:18.201476   83859 api_server.go:141] control plane version: v1.31.2
	I1210 00:05:18.201501   83859 api_server.go:131] duration metric: took 5.340199ms to wait for apiserver health ...
	I1210 00:05:18.201508   83859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:05:18.208072   83859 system_pods.go:59] 9 kube-system pods found
	I1210 00:05:18.208105   83859 system_pods.go:61] "coredns-7c65d6cfc9-56djc" [2e25bad9-b88a-4ac8-a180-968bf6b057a2] Running
	I1210 00:05:18.208114   83859 system_pods.go:61] "coredns-7c65d6cfc9-8rxx7" [3fdfd41b-8ef7-41af-a703-ebecfe9ad319] Running
	I1210 00:05:18.208119   83859 system_pods.go:61] "etcd-no-preload-048296" [4447a7d6-df4b-4760-9352-4df0310017ee] Running
	I1210 00:05:18.208125   83859 system_pods.go:61] "kube-apiserver-no-preload-048296" [55a06182-1520-4497-8358-639438eeb297] Running
	I1210 00:05:18.208130   83859 system_pods.go:61] "kube-controller-manager-no-preload-048296" [f30bdb21-7742-46e7-90fe-403679f5e02a] Running
	I1210 00:05:18.208136   83859 system_pods.go:61] "kube-proxy-qklxb" [8bb029a1-abf9-4825-b9ec-0520a78cb3d8] Running
	I1210 00:05:18.208141   83859 system_pods.go:61] "kube-scheduler-no-preload-048296" [019d6d37-c586-4f12-8daf-b60752223cf1] Running
	I1210 00:05:18.208152   83859 system_pods.go:61] "metrics-server-6867b74b74-n2f8c" [8e9f56c9-fd67-4715-9148-1255be17f1fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:05:18.208163   83859 system_pods.go:61] "storage-provisioner" [55fe311f-4610-4805-9fb7-3f1cac7c96e6] Running
	I1210 00:05:18.208176   83859 system_pods.go:74] duration metric: took 6.661729ms to wait for pod list to return data ...
	I1210 00:05:18.208187   83859 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:05:18.375431   83859 default_sa.go:45] found service account: "default"
	I1210 00:05:18.375458   83859 default_sa.go:55] duration metric: took 167.260728ms for default service account to be created ...
	I1210 00:05:18.375467   83859 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:05:18.578148   83859 system_pods.go:86] 9 kube-system pods found
	I1210 00:05:18.578176   83859 system_pods.go:89] "coredns-7c65d6cfc9-56djc" [2e25bad9-b88a-4ac8-a180-968bf6b057a2] Running
	I1210 00:05:18.578183   83859 system_pods.go:89] "coredns-7c65d6cfc9-8rxx7" [3fdfd41b-8ef7-41af-a703-ebecfe9ad319] Running
	I1210 00:05:18.578187   83859 system_pods.go:89] "etcd-no-preload-048296" [4447a7d6-df4b-4760-9352-4df0310017ee] Running
	I1210 00:05:18.578191   83859 system_pods.go:89] "kube-apiserver-no-preload-048296" [55a06182-1520-4497-8358-639438eeb297] Running
	I1210 00:05:18.578195   83859 system_pods.go:89] "kube-controller-manager-no-preload-048296" [f30bdb21-7742-46e7-90fe-403679f5e02a] Running
	I1210 00:05:18.578198   83859 system_pods.go:89] "kube-proxy-qklxb" [8bb029a1-abf9-4825-b9ec-0520a78cb3d8] Running
	I1210 00:05:18.578203   83859 system_pods.go:89] "kube-scheduler-no-preload-048296" [019d6d37-c586-4f12-8daf-b60752223cf1] Running
	I1210 00:05:18.578209   83859 system_pods.go:89] "metrics-server-6867b74b74-n2f8c" [8e9f56c9-fd67-4715-9148-1255be17f1fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:05:18.578213   83859 system_pods.go:89] "storage-provisioner" [55fe311f-4610-4805-9fb7-3f1cac7c96e6] Running
	I1210 00:05:18.578223   83859 system_pods.go:126] duration metric: took 202.750724ms to wait for k8s-apps to be running ...
	I1210 00:05:18.578233   83859 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:05:18.578277   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:05:18.592635   83859 system_svc.go:56] duration metric: took 14.391582ms WaitForService to wait for kubelet
	I1210 00:05:18.592669   83859 kubeadm.go:582] duration metric: took 7.814589832s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:05:18.592690   83859 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:05:18.776611   83859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:05:18.776632   83859 node_conditions.go:123] node cpu capacity is 2
	I1210 00:05:18.776642   83859 node_conditions.go:105] duration metric: took 183.94737ms to run NodePressure ...
	I1210 00:05:18.776654   83859 start.go:241] waiting for startup goroutines ...
	I1210 00:05:18.776660   83859 start.go:246] waiting for cluster config update ...
	I1210 00:05:18.776672   83859 start.go:255] writing updated cluster config ...
	I1210 00:05:18.776944   83859 ssh_runner.go:195] Run: rm -f paused
	I1210 00:05:18.826550   83859 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:05:18.828405   83859 out.go:177] * Done! kubectl is now configured to use "no-preload-048296" cluster and "default" namespace by default
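	[editor's note] The readiness sequence above (node Ready, per-pod Ready, then the probe against https://192.168.61.182:8443/healthz returning 200 "ok") reduces to a poll-until-200 loop with a deadline. A minimal Go sketch of that pattern follows; the function name and the InsecureSkipVerify shortcut are assumptions for illustration, not minikube's actual implementation, which would verify against the cluster CA.

	// healthzwait: illustrative sketch of polling the apiserver /healthz endpoint until it reports healthy.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url every interval until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			// The apiserver serves a self-signed certificate in this setup; the sketch skips
			// verification, a real client would pin the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.182:8443/healthz", 2*time.Second, 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz returned 200: ok")
	}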
	I1210 00:05:43.088214   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:05:43.088352   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:05:43.089912   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:05:43.089988   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:05:43.090054   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:05:43.090140   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:05:43.090225   84547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:05:43.090302   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:05:43.092050   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:05:43.092141   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:05:43.092210   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:05:43.092305   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:05:43.092392   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:05:43.092493   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:05:43.092569   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:05:43.092680   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:05:43.092761   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:05:43.092865   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:05:43.092975   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:05:43.093045   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:05:43.093143   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:05:43.093188   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:05:43.093236   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:05:43.093317   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:05:43.093402   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:05:43.093561   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:05:43.093728   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:05:43.093785   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:05:43.093855   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:05:43.095285   84547 out.go:235]   - Booting up control plane ...
	I1210 00:05:43.095396   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:05:43.095469   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:05:43.095525   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:05:43.095630   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:05:43.095804   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:05:43.095873   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:05:43.095960   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096155   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096237   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096414   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096486   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096679   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096741   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096904   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096969   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.097122   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.097129   84547 kubeadm.go:310] 
	I1210 00:05:43.097167   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:05:43.097202   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:05:43.097211   84547 kubeadm.go:310] 
	I1210 00:05:43.097251   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:05:43.097280   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:05:43.097373   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:05:43.097379   84547 kubeadm.go:310] 
	I1210 00:05:43.097465   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:05:43.097495   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:05:43.097522   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:05:43.097531   84547 kubeadm.go:310] 
	I1210 00:05:43.097619   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:05:43.097688   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:05:43.097694   84547 kubeadm.go:310] 
	I1210 00:05:43.097794   84547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:05:43.097875   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:05:43.097941   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:05:43.098038   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:05:43.098124   84547 kubeadm.go:310] 
	W1210 00:05:43.098179   84547 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
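	[editor's note] The kubeadm failure above ends with its standard troubleshooting advice: check the kubelet with systemctl and journalctl, then list control-plane containers with crictl against the CRI-O socket. A small Go sketch that runs those same commands and dumps their output is shown below; the wrapper itself is illustrative only, and in the test harness these commands are executed on the VM over SSH rather than locally.

	// kubeletdiag: illustrative sketch of the troubleshooting steps suggested by the kubeadm error above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		if err != nil {
			fmt.Printf("(exited with error: %v)\n", err)
		}
	}

	func main() {
		// Is the kubelet service up at all?
		run("systemctl", "status", "kubelet")
		// Recent kubelet journal entries usually show the misconfiguration.
		run("journalctl", "-xeu", "kubelet", "--no-pager", "-n", "100")
		// List the control-plane containers the runtime actually started, if any.
		run("crictl", "--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a")
	}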
	
	I1210 00:05:43.098224   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:05:48.532537   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.434287494s)
	I1210 00:05:48.532615   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:05:48.546394   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:05:48.555650   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:05:48.555669   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:05:48.555721   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:05:48.565301   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:05:48.565368   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:05:48.575330   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:05:48.584238   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:05:48.584324   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:05:48.593395   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.602150   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:05:48.602209   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.611177   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:05:48.619837   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:05:48.619907   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
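	[editor's note] The four grep/rm pairs above implement a simple stale-config check: each kubeconfig under /etc/kubernetes is kept only if it references https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init is retried. An illustrative Go sketch of that check follows; the paths and endpoint string are taken from the log, while the helper itself is hypothetical rather than minikube's actual code.

	// staleconf: illustrative sketch of removing kubeconfigs that do not point at the expected control-plane endpoint.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	var confs = []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	func main() {
		for _, path := range confs {
			data, err := os.ReadFile(path)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				// Missing or pointing somewhere else: treat it as stale so kubeadm can rewrite it.
				fmt.Printf("removing stale config %s\n", path)
				_ = os.Remove(path) // ignore "no such file" errors
			}
		}
	}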
	I1210 00:05:48.629126   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:05:48.827680   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:07:44.857816   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:07:44.857930   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:07:44.859490   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:07:44.859542   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:07:44.859627   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:07:44.859758   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:07:44.859894   84547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:07:44.859953   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:07:44.861538   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:07:44.861657   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:07:44.861736   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:07:44.861809   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:07:44.861861   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:07:44.861920   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:07:44.861969   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:07:44.862024   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:07:44.862077   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:07:44.862143   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:07:44.862212   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:07:44.862246   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:07:44.862293   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:07:44.862343   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:07:44.862408   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:07:44.862505   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:07:44.862615   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:07:44.862765   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:07:44.862897   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:07:44.862955   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:07:44.863059   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:07:44.865485   84547 out.go:235]   - Booting up control plane ...
	I1210 00:07:44.865595   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:07:44.865720   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:07:44.865815   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:07:44.865929   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:07:44.866136   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:07:44.866209   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:07:44.866332   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866517   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866586   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866752   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866818   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866976   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867042   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867210   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867323   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867512   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867522   84547 kubeadm.go:310] 
	I1210 00:07:44.867611   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:07:44.867671   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:07:44.867691   84547 kubeadm.go:310] 
	I1210 00:07:44.867729   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:07:44.867766   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:07:44.867880   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:07:44.867895   84547 kubeadm.go:310] 
	I1210 00:07:44.868055   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:07:44.868105   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:07:44.868138   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:07:44.868146   84547 kubeadm.go:310] 
	I1210 00:07:44.868250   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:07:44.868354   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:07:44.868362   84547 kubeadm.go:310] 
	I1210 00:07:44.868464   84547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:07:44.868540   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:07:44.868606   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:07:44.868689   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:07:44.868741   84547 kubeadm.go:310] 
	I1210 00:07:44.868762   84547 kubeadm.go:394] duration metric: took 8m1.943355039s to StartCluster
	I1210 00:07:44.868799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:07:44.868852   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:07:44.906641   84547 cri.go:89] found id: ""
	I1210 00:07:44.906667   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.906675   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:07:44.906681   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:07:44.906734   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:07:44.942832   84547 cri.go:89] found id: ""
	I1210 00:07:44.942863   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.942872   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:07:44.942881   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:07:44.942945   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:07:44.978010   84547 cri.go:89] found id: ""
	I1210 00:07:44.978034   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.978042   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:07:44.978047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:07:44.978108   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:07:45.017066   84547 cri.go:89] found id: ""
	I1210 00:07:45.017089   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.017097   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:07:45.017110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:07:45.017172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:07:45.049757   84547 cri.go:89] found id: ""
	I1210 00:07:45.049778   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.049786   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:07:45.049791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:07:45.049842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:07:45.082754   84547 cri.go:89] found id: ""
	I1210 00:07:45.082789   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.082800   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:07:45.082807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:07:45.082933   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:07:45.117188   84547 cri.go:89] found id: ""
	I1210 00:07:45.117219   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.117227   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:07:45.117233   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:07:45.117302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:07:45.153747   84547 cri.go:89] found id: ""
	I1210 00:07:45.153776   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.153785   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:07:45.153795   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:07:45.153810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:07:45.207625   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:07:45.207662   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:07:45.220929   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:07:45.220957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:07:45.291850   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:07:45.291883   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:07:45.291899   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:07:45.397592   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:07:45.397629   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 00:07:45.446755   84547 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 00:07:45.446816   84547 out.go:270] * 
	W1210 00:07:45.446898   84547 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.446921   84547 out.go:270] * 
	W1210 00:07:45.448145   84547 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 00:07:45.452118   84547 out.go:201] 
	W1210 00:07:45.453335   84547 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.453398   84547 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 00:07:45.453438   84547 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 00:07:45.455704   84547 out.go:201] 
	
	
	==> CRI-O <==
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.768186885Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789810768158562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12acc85e-263f-4ff4-ad65-33781db2c22a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.768699365Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a319d3f-0710-48fc-b8fc-063d500c0ba0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.768751248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a319d3f-0710-48fc-b8fc-063d500c0ba0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.768780860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0a319d3f-0710-48fc-b8fc-063d500c0ba0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.801495329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7649fd0-b186-48a8-a580-cbfbee8f37df name=/runtime.v1.RuntimeService/Version
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.801592236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7649fd0-b186-48a8-a580-cbfbee8f37df name=/runtime.v1.RuntimeService/Version
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.802821830Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3cbc4aa8-cc91-42ea-a9e6-42e0e3ce708c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.803349483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789810803317350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3cbc4aa8-cc91-42ea-a9e6-42e0e3ce708c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.803845918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec575e90-4e5e-4024-8b0d-e5caed849c72 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.803925437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec575e90-4e5e-4024-8b0d-e5caed849c72 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.803973271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ec575e90-4e5e-4024-8b0d-e5caed849c72 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.835284082Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74dd29fe-a0da-49c3-b49d-8825a7122f06 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.835397927Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74dd29fe-a0da-49c3-b49d-8825a7122f06 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.836958338Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d5f9c06-1c50-4269-9b11-a83ec6baa43c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.837539475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789810837508294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d5f9c06-1c50-4269-9b11-a83ec6baa43c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.838418131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b2aa005-d623-422e-8c04-45c30a981e5d name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.838486497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b2aa005-d623-422e-8c04-45c30a981e5d name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.838520649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2b2aa005-d623-422e-8c04-45c30a981e5d name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.867825980Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1eb7ba76-0cda-4d03-9ede-c215782fc8f3 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.867911999Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1eb7ba76-0cda-4d03-9ede-c215782fc8f3 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.868838127Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac282412-bf06-4e72-9a4f-27c3c78f5630 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.869208302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789810869186738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac282412-bf06-4e72-9a4f-27c3c78f5630 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.869798426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f34a2bca-d3f9-4cba-b214-daa12f5dfffe name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.869869787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f34a2bca-d3f9-4cba-b214-daa12f5dfffe name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:16:50 old-k8s-version-720064 crio[624]: time="2024-12-10 00:16:50.869905149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f34a2bca-d3f9-4cba-b214-daa12f5dfffe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 23:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049452] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044986] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.967295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.057304] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.622707] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.503870] systemd-fstab-generator[550]: Ignoring "noauto" option for root device
	[  +0.058578] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077964] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.212967] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.149461] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.273531] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +6.290551] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.069158] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.965394] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[ +12.784108] kauditd_printk_skb: 46 callbacks suppressed
	[Dec10 00:03] systemd-fstab-generator[5026]: Ignoring "noauto" option for root device
	[Dec10 00:05] systemd-fstab-generator[5316]: Ignoring "noauto" option for root device
	[  +0.059046] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:16:51 up 17 min,  0 users,  load average: 0.04, 0.04, 0.02
	Linux old-k8s-version-720064 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c0bf80, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000ca19b0, 0x24, 0x1000000000060, 0x7f3ae1b23ec8, 0x118, ...)
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]: net/http.(*Transport).dial(0xc00059e780, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000ca19b0, 0x24, 0x0, 0x0, 0x0, ...)
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]: net/http.(*Transport).dialConn(0xc00059e780, 0x4f7fe00, 0xc000052030, 0x0, 0xc000d22e40, 0x5, 0xc000ca19b0, 0x24, 0x0, 0xc0000c65a0, ...)
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]: net/http.(*Transport).dialConnFor(0xc00059e780, 0xc000530000)
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]: created by net/http.(*Transport).queueForDial
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]: goroutine 166 [select]:
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000288300, 0xc000623900, 0xc000d232c0, 0xc000d23260)
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]: created by net.(*netFD).connect
	Dec 10 00:16:45 old-k8s-version-720064 kubelet[6494]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Dec 10 00:16:45 old-k8s-version-720064 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 10 00:16:45 old-k8s-version-720064 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 00:16:46 old-k8s-version-720064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Dec 10 00:16:46 old-k8s-version-720064 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 10 00:16:46 old-k8s-version-720064 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 10 00:16:46 old-k8s-version-720064 kubelet[6503]: I1210 00:16:46.210191    6503 server.go:416] Version: v1.20.0
	Dec 10 00:16:46 old-k8s-version-720064 kubelet[6503]: I1210 00:16:46.210536    6503 server.go:837] Client rotation is on, will bootstrap in background
	Dec 10 00:16:46 old-k8s-version-720064 kubelet[6503]: I1210 00:16:46.212749    6503 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 10 00:16:46 old-k8s-version-720064 kubelet[6503]: I1210 00:16:46.214169    6503 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Dec 10 00:16:46 old-k8s-version-720064 kubelet[6503]: W1210 00:16:46.214203    6503 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-720064 -n old-k8s-version-720064
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-720064 -n old-k8s-version-720064: exit status 2 (236.935604ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-720064" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.39s)
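The kubelet on this node never answers on 127.0.0.1:10248, so kubeadm's wait-control-plane phase times out and minikube falls back to suggesting the kubelet.cgroup-driver=systemd workaround. A minimal shell sketch of the manual triage the log itself recommends (hypothetical session; assumes shell access to the node, e.g. via minikube ssh -p old-k8s-version-720064):

# Service state and the reason for the last exit (the restart counter above is already at 114).
sudo systemctl status kubelet
sudo journalctl -xeu kubelet --no-pager | tail -n 100

# The healthz endpoint kubeadm polls during wait-control-plane.
curl -sSL http://localhost:10248/healthz

# Control-plane containers as seen by CRI-O, per the kubeadm hint above.
sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

# From the host, the workaround suggested in the log above (flag name taken verbatim from that suggestion):
out/minikube-linux-amd64 start -p old-k8s-version-720064 --extra-config=kubelet.cgroup-driver=systemd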

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (435.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-825613 -n embed-certs-825613
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-10 00:19:53.422403034 +0000 UTC m=+6477.854018156
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-825613 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-825613 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.973µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-825613 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
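For reference, a minimal sketch of the same check done by hand, reusing the context, namespace, label selector, deployment name, and expected image from the assertions above (the jsonpath flag is plain kubectl, not part of the test harness):

# Pods the test waits for (label k8s-app=kubernetes-dashboard).
kubectl --context embed-certs-825613 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard -o wide

# Image carried by the dashboard-metrics-scraper deployment; the test expects registry.k8s.io/echoserver:1.4.
kubectl --context embed-certs-825613 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
  -o jsonpath='{.spec.template.spec.containers[*].image}'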
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-825613 -n embed-certs-825613
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-825613 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-825613 logs -n 25: (1.079386329s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-030585 sudo crio                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-030585                                       | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-866797 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | disable-driver-mounts-866797                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:52 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-048296             | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-825613            | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-871210  | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC | 09 Dec 24 23:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC |                     |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-720064        | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-048296                  | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-825613                 | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-871210       | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC | 10 Dec 24 00:04 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-720064             | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC | 10 Dec 24 00:19 UTC |
	| start   | -p newest-cni-677937 --memory=2200 --alsologtostderr   | newest-cni-677937            | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC | 10 Dec 24 00:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC | 10 Dec 24 00:19 UTC |
	| addons  | enable metrics-server -p newest-cni-677937             | newest-cni-677937            | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC | 10 Dec 24 00:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-677937                                   | newest-cni-677937            | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:19:01
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:19:01.882939   91113 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:19:01.883055   91113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:19:01.883064   91113 out.go:358] Setting ErrFile to fd 2...
	I1210 00:19:01.883068   91113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:19:01.883276   91113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1210 00:19:01.883962   91113 out.go:352] Setting JSON to false
	I1210 00:19:01.884928   91113 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10893,"bootTime":1733779049,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:19:01.885001   91113 start.go:139] virtualization: kvm guest
	I1210 00:19:01.887267   91113 out.go:177] * [newest-cni-677937] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:19:01.888770   91113 notify.go:220] Checking for updates...
	I1210 00:19:01.888801   91113 out.go:177]   - MINIKUBE_LOCATION=19888
	I1210 00:19:01.890277   91113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:19:01.891689   91113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:19:01.892958   91113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1210 00:19:01.894326   91113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:19:01.896462   91113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:19:01.898308   91113 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:19:01.898417   91113 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:19:01.898520   91113 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:19:01.898618   91113 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:19:01.936993   91113 out.go:177] * Using the kvm2 driver based on user configuration
	I1210 00:19:01.938336   91113 start.go:297] selected driver: kvm2
	I1210 00:19:01.938369   91113 start.go:901] validating driver "kvm2" against <nil>
	I1210 00:19:01.938389   91113 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:19:01.939192   91113 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:19:01.939297   91113 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:19:01.956466   91113 install.go:137] /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:19:01.956527   91113 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1210 00:19:01.956583   91113 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 00:19:01.956788   91113 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 00:19:01.956816   91113 cni.go:84] Creating CNI manager for ""
	I1210 00:19:01.956866   91113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:19:01.956875   91113 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 00:19:01.956919   91113 start.go:340] cluster config:
	{Name:newest-cni-677937 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-677937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:19:01.957008   91113 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:19:01.959007   91113 out.go:177] * Starting "newest-cni-677937" primary control-plane node in "newest-cni-677937" cluster
	I1210 00:19:01.960226   91113 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:19:01.960284   91113 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:19:01.960296   91113 cache.go:56] Caching tarball of preloaded images
	I1210 00:19:01.960385   91113 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:19:01.960400   91113 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:19:01.960522   91113 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/config.json ...
	I1210 00:19:01.960548   91113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/config.json: {Name:mk9582fa5fc235c2ab303bc9997f26d8ee39b655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:19:01.960737   91113 start.go:360] acquireMachinesLock for newest-cni-677937: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:19:01.960811   91113 start.go:364] duration metric: took 49.981µs to acquireMachinesLock for "newest-cni-677937"
	I1210 00:19:01.960836   91113 start.go:93] Provisioning new machine with config: &{Name:newest-cni-677937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-677937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:19:01.960928   91113 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 00:19:01.964296   91113 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:19:01.964454   91113 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:19:01.964489   91113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:19:01.979514   91113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32901
	I1210 00:19:01.980002   91113 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:19:01.980532   91113 main.go:141] libmachine: Using API Version  1
	I1210 00:19:01.980553   91113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:19:01.980910   91113 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:19:01.981130   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetMachineName
	I1210 00:19:01.981250   91113 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:19:01.981363   91113 start.go:159] libmachine.API.Create for "newest-cni-677937" (driver="kvm2")
	I1210 00:19:01.981396   91113 client.go:168] LocalClient.Create starting
	I1210 00:19:01.981433   91113 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1210 00:19:01.981468   91113 main.go:141] libmachine: Decoding PEM data...
	I1210 00:19:01.981482   91113 main.go:141] libmachine: Parsing certificate...
	I1210 00:19:01.981526   91113 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1210 00:19:01.981545   91113 main.go:141] libmachine: Decoding PEM data...
	I1210 00:19:01.981556   91113 main.go:141] libmachine: Parsing certificate...
	I1210 00:19:01.981569   91113 main.go:141] libmachine: Running pre-create checks...
	I1210 00:19:01.981578   91113 main.go:141] libmachine: (newest-cni-677937) Calling .PreCreateCheck
	I1210 00:19:01.981932   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetConfigRaw
	I1210 00:19:01.982274   91113 main.go:141] libmachine: Creating machine...
	I1210 00:19:01.982282   91113 main.go:141] libmachine: (newest-cni-677937) Calling .Create
	I1210 00:19:01.982406   91113 main.go:141] libmachine: (newest-cni-677937) Creating KVM machine...
	I1210 00:19:01.983714   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found existing default KVM network
	I1210 00:19:01.985179   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:01.985028   91136 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002700e0}
	I1210 00:19:01.985205   91113 main.go:141] libmachine: (newest-cni-677937) DBG | created network xml: 
	I1210 00:19:01.985220   91113 main.go:141] libmachine: (newest-cni-677937) DBG | <network>
	I1210 00:19:01.985233   91113 main.go:141] libmachine: (newest-cni-677937) DBG |   <name>mk-newest-cni-677937</name>
	I1210 00:19:01.985248   91113 main.go:141] libmachine: (newest-cni-677937) DBG |   <dns enable='no'/>
	I1210 00:19:01.985255   91113 main.go:141] libmachine: (newest-cni-677937) DBG |   
	I1210 00:19:01.985266   91113 main.go:141] libmachine: (newest-cni-677937) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1210 00:19:01.985272   91113 main.go:141] libmachine: (newest-cni-677937) DBG |     <dhcp>
	I1210 00:19:01.985293   91113 main.go:141] libmachine: (newest-cni-677937) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1210 00:19:01.985301   91113 main.go:141] libmachine: (newest-cni-677937) DBG |     </dhcp>
	I1210 00:19:01.985309   91113 main.go:141] libmachine: (newest-cni-677937) DBG |   </ip>
	I1210 00:19:01.985312   91113 main.go:141] libmachine: (newest-cni-677937) DBG |   
	I1210 00:19:01.985345   91113 main.go:141] libmachine: (newest-cni-677937) DBG | </network>
	I1210 00:19:01.985364   91113 main.go:141] libmachine: (newest-cni-677937) DBG | 
	I1210 00:19:01.990916   91113 main.go:141] libmachine: (newest-cni-677937) DBG | trying to create private KVM network mk-newest-cni-677937 192.168.39.0/24...
	I1210 00:19:02.063925   91113 main.go:141] libmachine: (newest-cni-677937) DBG | private KVM network mk-newest-cni-677937 192.168.39.0/24 created
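
For readers following along, the network XML printed above can be defined and started directly against libvirt with the Go bindings. This is only an illustrative sketch (minikube's kvm2 driver does this internally); the import path and error handling are assumptions, and the XML is copied from the log. Building it requires cgo and the libvirt development headers.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt" // assumed import path; older code uses github.com/libvirt/libvirt-go
)

// networkXML mirrors the "created network xml" block shown in the log above.
const networkXML = `<network>
  <name>mk-newest-cni-677937</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Connect to the local system libvirt daemon, as the kvm2 driver does.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Define the persistent network from the XML above...
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("define network: %v", err)
	}
	defer net.Free()

	// ...and start it, which creates the bridge and activates the DHCP range.
	if err := net.Create(); err != nil {
		log.Fatalf("start network: %v", err)
	}
	log.Println("private KVM network mk-newest-cni-677937 is active")
}
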
	I1210 00:19:02.063959   91113 main.go:141] libmachine: (newest-cni-677937) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937 ...
	I1210 00:19:02.063977   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:02.063886   91136 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1210 00:19:02.063997   91113 main.go:141] libmachine: (newest-cni-677937) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:19:02.064016   91113 main.go:141] libmachine: (newest-cni-677937) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:19:02.318526   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:02.318385   91136 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa...
	I1210 00:19:02.512744   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:02.512579   91136 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/newest-cni-677937.rawdisk...
	I1210 00:19:02.512782   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Writing magic tar header
	I1210 00:19:02.512805   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Writing SSH key tar header
	I1210 00:19:02.512818   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:02.512772   91136 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937 ...
	I1210 00:19:02.512923   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937
	I1210 00:19:02.512962   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1210 00:19:02.512992   91113 main.go:141] libmachine: (newest-cni-677937) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937 (perms=drwx------)
	I1210 00:19:02.513007   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1210 00:19:02.513021   91113 main.go:141] libmachine: (newest-cni-677937) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:19:02.513037   91113 main.go:141] libmachine: (newest-cni-677937) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1210 00:19:02.513051   91113 main.go:141] libmachine: (newest-cni-677937) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1210 00:19:02.513064   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1210 00:19:02.513082   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:19:02.513094   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:19:02.513105   91113 main.go:141] libmachine: (newest-cni-677937) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:19:02.513129   91113 main.go:141] libmachine: (newest-cni-677937) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:19:02.513141   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home
	I1210 00:19:02.513153   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Skipping /home - not owner
	I1210 00:19:02.513166   91113 main.go:141] libmachine: (newest-cni-677937) Creating domain...
	I1210 00:19:02.514270   91113 main.go:141] libmachine: (newest-cni-677937) define libvirt domain using xml: 
	I1210 00:19:02.514307   91113 main.go:141] libmachine: (newest-cni-677937) <domain type='kvm'>
	I1210 00:19:02.514318   91113 main.go:141] libmachine: (newest-cni-677937)   <name>newest-cni-677937</name>
	I1210 00:19:02.514334   91113 main.go:141] libmachine: (newest-cni-677937)   <memory unit='MiB'>2200</memory>
	I1210 00:19:02.514342   91113 main.go:141] libmachine: (newest-cni-677937)   <vcpu>2</vcpu>
	I1210 00:19:02.514356   91113 main.go:141] libmachine: (newest-cni-677937)   <features>
	I1210 00:19:02.514364   91113 main.go:141] libmachine: (newest-cni-677937)     <acpi/>
	I1210 00:19:02.514372   91113 main.go:141] libmachine: (newest-cni-677937)     <apic/>
	I1210 00:19:02.514381   91113 main.go:141] libmachine: (newest-cni-677937)     <pae/>
	I1210 00:19:02.514392   91113 main.go:141] libmachine: (newest-cni-677937)     
	I1210 00:19:02.514403   91113 main.go:141] libmachine: (newest-cni-677937)   </features>
	I1210 00:19:02.514410   91113 main.go:141] libmachine: (newest-cni-677937)   <cpu mode='host-passthrough'>
	I1210 00:19:02.514423   91113 main.go:141] libmachine: (newest-cni-677937)   
	I1210 00:19:02.514433   91113 main.go:141] libmachine: (newest-cni-677937)   </cpu>
	I1210 00:19:02.514442   91113 main.go:141] libmachine: (newest-cni-677937)   <os>
	I1210 00:19:02.514457   91113 main.go:141] libmachine: (newest-cni-677937)     <type>hvm</type>
	I1210 00:19:02.514484   91113 main.go:141] libmachine: (newest-cni-677937)     <boot dev='cdrom'/>
	I1210 00:19:02.514504   91113 main.go:141] libmachine: (newest-cni-677937)     <boot dev='hd'/>
	I1210 00:19:02.514542   91113 main.go:141] libmachine: (newest-cni-677937)     <bootmenu enable='no'/>
	I1210 00:19:02.514563   91113 main.go:141] libmachine: (newest-cni-677937)   </os>
	I1210 00:19:02.514575   91113 main.go:141] libmachine: (newest-cni-677937)   <devices>
	I1210 00:19:02.514588   91113 main.go:141] libmachine: (newest-cni-677937)     <disk type='file' device='cdrom'>
	I1210 00:19:02.514622   91113 main.go:141] libmachine: (newest-cni-677937)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/boot2docker.iso'/>
	I1210 00:19:02.514633   91113 main.go:141] libmachine: (newest-cni-677937)       <target dev='hdc' bus='scsi'/>
	I1210 00:19:02.514643   91113 main.go:141] libmachine: (newest-cni-677937)       <readonly/>
	I1210 00:19:02.514654   91113 main.go:141] libmachine: (newest-cni-677937)     </disk>
	I1210 00:19:02.514669   91113 main.go:141] libmachine: (newest-cni-677937)     <disk type='file' device='disk'>
	I1210 00:19:02.514686   91113 main.go:141] libmachine: (newest-cni-677937)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:19:02.514703   91113 main.go:141] libmachine: (newest-cni-677937)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/newest-cni-677937.rawdisk'/>
	I1210 00:19:02.514713   91113 main.go:141] libmachine: (newest-cni-677937)       <target dev='hda' bus='virtio'/>
	I1210 00:19:02.514727   91113 main.go:141] libmachine: (newest-cni-677937)     </disk>
	I1210 00:19:02.514737   91113 main.go:141] libmachine: (newest-cni-677937)     <interface type='network'>
	I1210 00:19:02.514750   91113 main.go:141] libmachine: (newest-cni-677937)       <source network='mk-newest-cni-677937'/>
	I1210 00:19:02.514759   91113 main.go:141] libmachine: (newest-cni-677937)       <model type='virtio'/>
	I1210 00:19:02.514775   91113 main.go:141] libmachine: (newest-cni-677937)     </interface>
	I1210 00:19:02.514799   91113 main.go:141] libmachine: (newest-cni-677937)     <interface type='network'>
	I1210 00:19:02.514813   91113 main.go:141] libmachine: (newest-cni-677937)       <source network='default'/>
	I1210 00:19:02.514825   91113 main.go:141] libmachine: (newest-cni-677937)       <model type='virtio'/>
	I1210 00:19:02.514835   91113 main.go:141] libmachine: (newest-cni-677937)     </interface>
	I1210 00:19:02.514846   91113 main.go:141] libmachine: (newest-cni-677937)     <serial type='pty'>
	I1210 00:19:02.514859   91113 main.go:141] libmachine: (newest-cni-677937)       <target port='0'/>
	I1210 00:19:02.514870   91113 main.go:141] libmachine: (newest-cni-677937)     </serial>
	I1210 00:19:02.514889   91113 main.go:141] libmachine: (newest-cni-677937)     <console type='pty'>
	I1210 00:19:02.514917   91113 main.go:141] libmachine: (newest-cni-677937)       <target type='serial' port='0'/>
	I1210 00:19:02.514929   91113 main.go:141] libmachine: (newest-cni-677937)     </console>
	I1210 00:19:02.514939   91113 main.go:141] libmachine: (newest-cni-677937)     <rng model='virtio'>
	I1210 00:19:02.514949   91113 main.go:141] libmachine: (newest-cni-677937)       <backend model='random'>/dev/random</backend>
	I1210 00:19:02.514958   91113 main.go:141] libmachine: (newest-cni-677937)     </rng>
	I1210 00:19:02.514977   91113 main.go:141] libmachine: (newest-cni-677937)     
	I1210 00:19:02.514996   91113 main.go:141] libmachine: (newest-cni-677937)     
	I1210 00:19:02.515008   91113 main.go:141] libmachine: (newest-cni-677937)   </devices>
	I1210 00:19:02.515018   91113 main.go:141] libmachine: (newest-cni-677937) </domain>
	I1210 00:19:02.515033   91113 main.go:141] libmachine: (newest-cni-677937) 
	I1210 00:19:02.519387   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:07:79:9b in network default
	I1210 00:19:02.520037   91113 main.go:141] libmachine: (newest-cni-677937) Ensuring networks are active...
	I1210 00:19:02.520063   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:02.520811   91113 main.go:141] libmachine: (newest-cni-677937) Ensuring network default is active
	I1210 00:19:02.521177   91113 main.go:141] libmachine: (newest-cni-677937) Ensuring network mk-newest-cni-677937 is active
	I1210 00:19:02.521788   91113 main.go:141] libmachine: (newest-cni-677937) Getting domain xml...
	I1210 00:19:02.522701   91113 main.go:141] libmachine: (newest-cni-677937) Creating domain...
	I1210 00:19:03.784070   91113 main.go:141] libmachine: (newest-cni-677937) Waiting to get IP...
	I1210 00:19:03.784739   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:03.785257   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:03.785283   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:03.785213   91136 retry.go:31] will retry after 271.868849ms: waiting for machine to come up
	I1210 00:19:04.058606   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:04.059115   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:04.059145   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:04.059069   91136 retry.go:31] will retry after 296.967378ms: waiting for machine to come up
	I1210 00:19:04.357546   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:04.358014   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:04.358050   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:04.357969   91136 retry.go:31] will retry after 318.242447ms: waiting for machine to come up
	I1210 00:19:04.677589   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:04.677991   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:04.678036   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:04.677958   91136 retry.go:31] will retry after 578.593134ms: waiting for machine to come up
	I1210 00:19:05.258479   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:05.258989   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:05.259017   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:05.258962   91136 retry.go:31] will retry after 698.184483ms: waiting for machine to come up
	I1210 00:19:05.958995   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:05.959620   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:05.959647   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:05.959554   91136 retry.go:31] will retry after 600.420589ms: waiting for machine to come up
	I1210 00:19:06.561175   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:06.561618   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:06.561645   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:06.561575   91136 retry.go:31] will retry after 969.556201ms: waiting for machine to come up
	I1210 00:19:07.533276   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:07.533812   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:07.533843   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:07.533771   91136 retry.go:31] will retry after 1.474217483s: waiting for machine to come up
	I1210 00:19:09.328570   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:09.328963   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:09.328993   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:09.328928   91136 retry.go:31] will retry after 1.688546592s: waiting for machine to come up
	I1210 00:19:11.019755   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:11.020228   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:11.020260   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:11.020153   91136 retry.go:31] will retry after 2.320679231s: waiting for machine to come up
	I1210 00:19:13.342170   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:13.342765   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:13.342796   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:13.342723   91136 retry.go:31] will retry after 1.914085257s: waiting for machine to come up
	I1210 00:19:15.259860   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:15.260315   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:15.260343   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:15.260247   91136 retry.go:31] will retry after 2.222826983s: waiting for machine to come up
	I1210 00:19:17.484664   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:17.485115   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:17.485145   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:17.485067   91136 retry.go:31] will retry after 4.250537543s: waiting for machine to come up
	I1210 00:19:21.740569   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:21.740960   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:21.740985   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:21.740902   91136 retry.go:31] will retry after 5.428122223s: waiting for machine to come up
	I1210 00:19:27.170498   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.171063   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has current primary IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.171093   91113 main.go:141] libmachine: (newest-cni-677937) Found IP for machine: 192.168.39.239
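
The "will retry after …" messages above come from minikube's retry helper polling libvirt for a DHCP lease until the new domain reports an IP. The stand-alone Go sketch below mirrors that wait loop; lookupIP is a hypothetical placeholder for the real lease lookup, and the backoff values only approximate what retry.go produces.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases for
// the domain's MAC address; it returns an error until a lease exists.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries with a jittered, roughly exponential backoff until the
// machine obtains an address or the deadline expires, matching the pattern
// visible in the log (271ms, 296ms, ... up to several seconds).
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
}

func main() {
	if ip, err := waitForIP("52:54:00:d7:ab:8b", 10*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
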
	I1210 00:19:27.171108   91113 main.go:141] libmachine: (newest-cni-677937) Reserving static IP address...
	I1210 00:19:27.171543   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find host DHCP lease matching {name: "newest-cni-677937", mac: "52:54:00:d7:ab:8b", ip: "192.168.39.239"} in network mk-newest-cni-677937
	I1210 00:19:27.248257   91113 main.go:141] libmachine: (newest-cni-677937) Reserved static IP address: 192.168.39.239
	I1210 00:19:27.248302   91113 main.go:141] libmachine: (newest-cni-677937) Waiting for SSH to be available...
	I1210 00:19:27.248313   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Getting to WaitForSSH function...
	I1210 00:19:27.250759   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.251142   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:27.251165   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.251284   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Using SSH client type: external
	I1210 00:19:27.251319   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa (-rw-------)
	I1210 00:19:27.251350   91113 main.go:141] libmachine: (newest-cni-677937) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:19:27.251371   91113 main.go:141] libmachine: (newest-cni-677937) DBG | About to run SSH command:
	I1210 00:19:27.251382   91113 main.go:141] libmachine: (newest-cni-677937) DBG | exit 0
	I1210 00:19:27.379734   91113 main.go:141] libmachine: (newest-cni-677937) DBG | SSH cmd err, output: <nil>: 
	I1210 00:19:27.380063   91113 main.go:141] libmachine: (newest-cni-677937) KVM machine creation complete!
	I1210 00:19:27.380435   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetConfigRaw
	I1210 00:19:27.380922   91113 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:19:27.381094   91113 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:19:27.381290   91113 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:19:27.381304   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetState
	I1210 00:19:27.382829   91113 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:19:27.382846   91113 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:19:27.382852   91113 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:19:27.382858   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:19:27.384800   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.385143   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:27.385173   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.385341   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:19:27.385516   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:27.385657   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:27.385784   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:19:27.385939   91113 main.go:141] libmachine: Using SSH client type: native
	I1210 00:19:27.386183   91113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I1210 00:19:27.386200   91113 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:19:27.498642   91113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:19:27.498684   91113 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:19:27.498693   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:19:27.501399   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.501751   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:27.501771   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.501893   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:19:27.502109   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:27.502266   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:27.502396   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:19:27.502569   91113 main.go:141] libmachine: Using SSH client type: native
	I1210 00:19:27.502761   91113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I1210 00:19:27.502774   91113 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:19:27.616110   91113 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:19:27.616198   91113 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:19:27.616212   91113 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:19:27.616224   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetMachineName
	I1210 00:19:27.616516   91113 buildroot.go:166] provisioning hostname "newest-cni-677937"
	I1210 00:19:27.616540   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetMachineName
	I1210 00:19:27.616756   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:19:27.619341   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.619803   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:27.619832   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.620004   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:19:27.620195   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:27.620340   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:27.620484   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:19:27.620632   91113 main.go:141] libmachine: Using SSH client type: native
	I1210 00:19:27.620833   91113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I1210 00:19:27.620852   91113 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-677937 && echo "newest-cni-677937" | sudo tee /etc/hostname
	I1210 00:19:27.749227   91113 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-677937
	
	I1210 00:19:27.749250   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:19:27.751962   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.752325   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:27.752358   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.752522   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:19:27.752738   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:27.752926   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:27.753080   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:19:27.753283   91113 main.go:141] libmachine: Using SSH client type: native
	I1210 00:19:27.753528   91113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I1210 00:19:27.753554   91113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-677937' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-677937/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-677937' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:19:27.877654   91113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:19:27.877683   91113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1210 00:19:27.877701   91113 buildroot.go:174] setting up certificates
	I1210 00:19:27.877710   91113 provision.go:84] configureAuth start
	I1210 00:19:27.877719   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetMachineName
	I1210 00:19:27.877987   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetIP
	I1210 00:19:27.880566   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.880881   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:27.880909   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.881095   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:19:27.883009   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.883274   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:27.883293   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:27.883431   91113 provision.go:143] copyHostCerts
	I1210 00:19:27.883489   91113 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1210 00:19:27.883512   91113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1210 00:19:27.883625   91113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1210 00:19:27.883727   91113 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1210 00:19:27.883737   91113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1210 00:19:27.883766   91113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1210 00:19:27.883822   91113 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1210 00:19:27.883830   91113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1210 00:19:27.883851   91113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1210 00:19:27.883898   91113 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.newest-cni-677937 san=[127.0.0.1 192.168.39.239 localhost minikube newest-cni-677937]
	I1210 00:19:28.030057   91113 provision.go:177] copyRemoteCerts
	I1210 00:19:28.030119   91113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:19:28.030142   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:19:28.032767   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.033025   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:28.033051   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.033232   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:19:28.033410   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:28.033562   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:19:28.033712   91113 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:19:28.121048   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 00:19:28.144838   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 00:19:28.168132   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 00:19:28.191195   91113 provision.go:87] duration metric: took 313.46864ms to configureAuth
	I1210 00:19:28.191226   91113 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:19:28.191403   91113 config.go:182] Loaded profile config "newest-cni-677937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:19:28.191494   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:19:28.194286   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.194674   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:28.194709   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.194971   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:19:28.195181   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:28.195376   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:28.195532   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:19:28.195725   91113 main.go:141] libmachine: Using SSH client type: native
	I1210 00:19:28.195881   91113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I1210 00:19:28.195898   91113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:19:28.426887   91113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:19:28.426923   91113 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:19:28.426933   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetURL
	I1210 00:19:28.428286   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Using libvirt version 6000000
	I1210 00:19:28.430744   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.431111   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:28.431136   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.431330   91113 main.go:141] libmachine: Docker is up and running!
	I1210 00:19:28.431348   91113 main.go:141] libmachine: Reticulating splines...
	I1210 00:19:28.431355   91113 client.go:171] duration metric: took 26.449952036s to LocalClient.Create
	I1210 00:19:28.431376   91113 start.go:167] duration metric: took 26.45001582s to libmachine.API.Create "newest-cni-677937"
	I1210 00:19:28.431386   91113 start.go:293] postStartSetup for "newest-cni-677937" (driver="kvm2")
	I1210 00:19:28.431396   91113 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:19:28.431412   91113 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:19:28.431668   91113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:19:28.431691   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:19:28.433871   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.434155   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:28.434183   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.434274   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:19:28.434424   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:28.434559   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:19:28.434713   91113 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:19:28.521445   91113 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:19:28.525274   91113 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:19:28.525302   91113 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1210 00:19:28.525367   91113 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1210 00:19:28.525439   91113 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1210 00:19:28.525531   91113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:19:28.534250   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1210 00:19:28.557031   91113 start.go:296] duration metric: took 125.632347ms for postStartSetup
	I1210 00:19:28.557085   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetConfigRaw
	I1210 00:19:28.557740   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetIP
	I1210 00:19:28.560540   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.560933   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:28.560969   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.561220   91113 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/config.json ...
	I1210 00:19:28.561399   91113 start.go:128] duration metric: took 26.60046017s to createHost
	I1210 00:19:28.561421   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:19:28.563749   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.564067   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:28.564097   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.564269   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:19:28.564424   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:28.564603   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:28.564739   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:19:28.564889   91113 main.go:141] libmachine: Using SSH client type: native
	I1210 00:19:28.565108   91113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I1210 00:19:28.565123   91113 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:19:28.680145   91113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789968.659644129
	
	I1210 00:19:28.680167   91113 fix.go:216] guest clock: 1733789968.659644129
	I1210 00:19:28.680174   91113 fix.go:229] Guest: 2024-12-10 00:19:28.659644129 +0000 UTC Remote: 2024-12-10 00:19:28.56141111 +0000 UTC m=+26.719377523 (delta=98.233019ms)
	I1210 00:19:28.680213   91113 fix.go:200] guest clock delta is within tolerance: 98.233019ms
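
The delta reported by fix.go above is computed by comparing the guest's date +%s.%N output against the host clock. The short Go sketch below reproduces that comparison using the values from the log; the 2s tolerance is an assumption for illustration, not necessarily minikube's actual threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock converts the guest's `date +%s.%N` output into a time.Time.
// Parsing via float64 is approximate (sub-microsecond precision is lost),
// which is fine for a tolerance check on the order of milliseconds.
func parseGuestClock(s string) (time.Time, error) {
	secs, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	// Values taken from the log lines above.
	guest, err := parseGuestClock("1733789968.659644129")
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 12, 10, 0, 19, 28, 561411110, time.UTC)

	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // hypothetical tolerance
	within := math.Abs(float64(delta)) <= float64(tolerance)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, within)
}
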
	I1210 00:19:28.680218   91113 start.go:83] releasing machines lock for "newest-cni-677937", held for 26.719395875s
	I1210 00:19:28.680236   91113 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:19:28.680548   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetIP
	I1210 00:19:28.683245   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.683631   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:28.683670   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.683839   91113 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:19:28.684332   91113 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:19:28.684505   91113 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:19:28.684574   91113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:19:28.684628   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:19:28.684747   91113 ssh_runner.go:195] Run: cat /version.json
	I1210 00:19:28.684772   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:19:28.687231   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.687528   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.687576   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:28.687615   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.687721   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:19:28.687872   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:28.688018   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:28.688040   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:28.688048   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:19:28.688277   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:19:28.688269   91113 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:19:28.688434   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:28.688562   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:19:28.688687   91113 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:19:28.768025   91113 ssh_runner.go:195] Run: systemctl --version
	I1210 00:19:28.792224   91113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:19:28.951187   91113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:19:28.958029   91113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:19:28.958091   91113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:19:28.974287   91113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:19:28.974314   91113 start.go:495] detecting cgroup driver to use...
	I1210 00:19:28.974387   91113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:19:28.992573   91113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:19:29.007235   91113 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:19:29.007283   91113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:19:29.021033   91113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:19:29.035166   91113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:19:29.152673   91113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:19:29.309884   91113 docker.go:233] disabling docker service ...
	I1210 00:19:29.309958   91113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:19:29.323752   91113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:19:29.336227   91113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:19:29.466524   91113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:19:29.595997   91113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:19:29.609301   91113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:19:29.627174   91113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:19:29.627249   91113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:19:29.636924   91113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:19:29.636995   91113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:19:29.646955   91113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:19:29.656991   91113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:19:29.667005   91113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:19:29.677934   91113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:19:29.688150   91113 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:19:29.706502   91113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
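
The sed commands above pin the CRI-O pause image and switch the cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf on the guest over SSH. A rough local-file equivalent in Go is sketched below; it covers only the first two edits, operates on a local copy of the file rather than over SSH, and the path is an assumption for illustration.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Hypothetical local path; on the minikube guest this is
	// /etc/crio/crio.conf.d/02-crio.conf and is edited via ssh_runner.
	const path = "02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)

	// Pin the pause image, as in the first sed command above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Force the cgroupfs cgroup manager, as in the second sed command.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("updated", path, "- restart crio to apply")
}
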
	I1210 00:19:29.716207   91113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:19:29.725196   91113 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:19:29.725263   91113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:19:29.737663   91113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:19:29.747256   91113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:19:29.872574   91113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:19:29.966588   91113 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:19:29.966667   91113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:19:29.971081   91113 start.go:563] Will wait 60s for crictl version
	I1210 00:19:29.971146   91113 ssh_runner.go:195] Run: which crictl
	I1210 00:19:29.974614   91113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:19:30.009212   91113 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:19:30.009308   91113 ssh_runner.go:195] Run: crio --version
	I1210 00:19:30.035506   91113 ssh_runner.go:195] Run: crio --version
	I1210 00:19:30.065595   91113 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:19:30.066974   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetIP
	I1210 00:19:30.069696   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:30.070002   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:30.070026   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:30.070189   91113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:19:30.074223   91113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
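The /etc/hosts edit above follows a drop-and-append pattern: grep -v strips any existing host.minikube.internal record, the fresh record is echoed after it, and the result is copied back over /etc/hosts. A hedged Go sketch of the same transformation on an in-memory hosts file (the sample content is made up; only the host name and 192.168.39.1 come from the log):

package main

import (
	"fmt"
	"strings"
)

func main() {
	hosts := "127.0.0.1 localhost\n192.168.39.5\thost.minikube.internal\n"

	// Drop any stale host.minikube.internal line, then append the fresh record,
	// mirroring the `{ grep -v ...; echo ...; } > /tmp/h.$$` pipeline in the log.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") && !strings.HasSuffix(line, " host.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.39.1\thost.minikube.internal")
	fmt.Println(strings.Join(kept, "\n"))
}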
	I1210 00:19:30.088114   91113 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 00:19:30.089549   91113 kubeadm.go:883] updating cluster {Name:newest-cni-677937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-677937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:19:30.089695   91113 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:19:30.089773   91113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:19:30.120802   91113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 00:19:30.120876   91113 ssh_runner.go:195] Run: which lz4
	I1210 00:19:30.124774   91113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 00:19:30.128725   91113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 00:19:30.128758   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 00:19:31.374343   91113 crio.go:462] duration metric: took 1.249596945s to copy over tarball
	I1210 00:19:31.374416   91113 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 00:19:33.471763   91113 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.097319289s)
	I1210 00:19:33.471793   91113 crio.go:469] duration metric: took 2.097421446s to extract the tarball
	I1210 00:19:33.471801   91113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 00:19:33.508881   91113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:19:33.557726   91113 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:19:33.557746   91113 cache_images.go:84] Images are preloaded, skipping loading
	I1210 00:19:33.557753   91113 kubeadm.go:934] updating node { 192.168.39.239 8443 v1.31.2 crio true true} ...
	I1210 00:19:33.557839   91113 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-677937 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-677937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:19:33.557900   91113 ssh_runner.go:195] Run: crio config
	I1210 00:19:33.601281   91113 cni.go:84] Creating CNI manager for ""
	I1210 00:19:33.601309   91113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:19:33.601320   91113 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1210 00:19:33.601344   91113 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.239 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-677937 NodeName:newest-cni-677937 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:19:33.601472   91113 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-677937"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.239"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:19:33.601529   91113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:19:33.611293   91113 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:19:33.611372   91113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:19:33.620693   91113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1210 00:19:33.638063   91113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:19:33.654391   91113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
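In the kubeadm.yaml just copied over (dumped in full above), the pod network 10.42.0.0/16 from the kubeadm.pod-network-cidr extra option appears both as podSubnet and as the kube-proxy clusterCIDR, while services keep the default 10.96.0.0/12; the two ranges must not overlap. A small standard-library Go sketch of that sanity check (the check is illustrative, not part of minikube):

package main

import (
	"fmt"
	"net"
)

func main() {
	_, podNet, _ := net.ParseCIDR("10.42.0.0/16") // podSubnet / clusterCIDR
	_, svcNet, _ := net.ParseCIDR("10.96.0.0/12") // serviceSubnet

	// Two CIDRs overlap iff either network address falls inside the other range.
	overlap := podNet.Contains(svcNet.IP) || svcNet.Contains(podNet.IP)
	fmt.Println("pod/service CIDR overlap:", overlap) // prints: false
}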
	I1210 00:19:33.678738   91113 ssh_runner.go:195] Run: grep 192.168.39.239	control-plane.minikube.internal$ /etc/hosts
	I1210 00:19:33.682354   91113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:19:33.694232   91113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:19:33.802513   91113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:19:33.818177   91113 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937 for IP: 192.168.39.239
	I1210 00:19:33.818198   91113 certs.go:194] generating shared ca certs ...
	I1210 00:19:33.818212   91113 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:19:33.818384   91113 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1210 00:19:33.818430   91113 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1210 00:19:33.818444   91113 certs.go:256] generating profile certs ...
	I1210 00:19:33.818506   91113 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/client.key
	I1210 00:19:33.818525   91113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/client.crt with IP's: []
	I1210 00:19:33.981258   91113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/client.crt ...
	I1210 00:19:33.981289   91113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/client.crt: {Name:mkbb1d88ee0d7ac0b4146f895b4f8188605c18d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:19:33.981474   91113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/client.key ...
	I1210 00:19:33.981485   91113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/client.key: {Name:mk042ceaa389d06aef0b1a7a6ca333fc2f7074ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:19:33.981559   91113 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.key.ad66389d
	I1210 00:19:33.981574   91113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.crt.ad66389d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.239]
	I1210 00:19:34.120814   91113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.crt.ad66389d ...
	I1210 00:19:34.120840   91113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.crt.ad66389d: {Name:mk07cd50295ccdfbfbea60c97239417846ce436d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:19:34.121001   91113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.key.ad66389d ...
	I1210 00:19:34.121014   91113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.key.ad66389d: {Name:mk0910674e199a91ac373806917582dedaedbec3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:19:34.121085   91113 certs.go:381] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.crt.ad66389d -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.crt
	I1210 00:19:34.121171   91113 certs.go:385] copying /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.key.ad66389d -> /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.key
	I1210 00:19:34.121224   91113 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/proxy-client.key
	I1210 00:19:34.121239   91113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/proxy-client.crt with IP's: []
	I1210 00:19:34.277289   91113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/proxy-client.crt ...
	I1210 00:19:34.277321   91113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/proxy-client.crt: {Name:mk09c3254e1d654fa8cbed5925196c4314d19933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:19:34.277483   91113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/proxy-client.key ...
	I1210 00:19:34.277496   91113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/proxy-client.key: {Name:mka4bfa30495115ba242aaf9752195e39c129dea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:19:34.277661   91113 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1210 00:19:34.277695   91113 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1210 00:19:34.277705   91113 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:19:34.277730   91113 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1210 00:19:34.277754   91113 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:19:34.277776   91113 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1210 00:19:34.277813   91113 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1210 00:19:34.278436   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:19:34.304980   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 00:19:34.327436   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:19:34.349663   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:19:34.372453   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:19:34.395990   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 00:19:34.419074   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:19:34.441770   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 00:19:34.465113   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1210 00:19:34.491052   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1210 00:19:34.515168   91113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:19:34.538227   91113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:19:34.556875   91113 ssh_runner.go:195] Run: openssl version
	I1210 00:19:34.562933   91113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:19:34.575971   91113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:19:34.583405   91113 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:19:34.583479   91113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:19:34.593480   91113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:19:34.610260   91113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1210 00:19:34.626747   91113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1210 00:19:34.631360   91113 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1210 00:19:34.631428   91113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1210 00:19:34.637097   91113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1210 00:19:34.647904   91113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1210 00:19:34.658616   91113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1210 00:19:34.663077   91113 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1210 00:19:34.663125   91113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1210 00:19:34.668583   91113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
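The openssl x509 -hash calls above compute the subject-name hash of each CA PEM so it can be symlinked into /etc/ssl/certs/&lt;hash&gt;.0, which is how the system trust store locates it. As a hedged illustration of what those PEM files contain, here is a standard-library Go sketch that parses one of them and prints its subject and expiry (the path comes from the log; running this requires that file to exist):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	fmt.Printf("subject=%s expires=%s\n", cert.Subject, cert.NotAfter)
}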
	I1210 00:19:34.679192   91113 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:19:34.682943   91113 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:19:34.683002   91113 kubeadm.go:392] StartCluster: {Name:newest-cni-677937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-677937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:19:34.683085   91113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:19:34.683141   91113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:19:34.718690   91113 cri.go:89] found id: ""
	I1210 00:19:34.718778   91113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:19:34.729022   91113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:19:34.738623   91113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:19:34.748738   91113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:19:34.748766   91113 kubeadm.go:157] found existing configuration files:
	
	I1210 00:19:34.748822   91113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:19:34.758793   91113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:19:34.758844   91113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:19:34.768520   91113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:19:34.777929   91113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:19:34.778002   91113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:19:34.788006   91113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:19:34.797158   91113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:19:34.797220   91113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:19:34.806518   91113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:19:34.815198   91113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:19:34.815244   91113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:19:34.824502   91113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:19:34.920829   91113 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:19:34.920893   91113 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:19:35.027867   91113 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:19:35.028003   91113 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:19:35.028115   91113 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:19:35.036897   91113 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:19:35.104826   91113 out.go:235]   - Generating certificates and keys ...
	I1210 00:19:35.104969   91113 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:19:35.105060   91113 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:19:35.274831   91113 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 00:19:35.366573   91113 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1210 00:19:35.541944   91113 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1210 00:19:35.650394   91113 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1210 00:19:36.131592   91113 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1210 00:19:36.131752   91113 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-677937] and IPs [192.168.39.239 127.0.0.1 ::1]
	I1210 00:19:36.410634   91113 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1210 00:19:36.410936   91113 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-677937] and IPs [192.168.39.239 127.0.0.1 ::1]
	I1210 00:19:36.469383   91113 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 00:19:36.623048   91113 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 00:19:37.012663   91113 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1210 00:19:37.013202   91113 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:19:37.109519   91113 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:19:37.261151   91113 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:19:37.305222   91113 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:19:37.534436   91113 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:19:37.613516   91113 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:19:37.614211   91113 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:19:37.617286   91113 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:19:37.655640   91113 out.go:235]   - Booting up control plane ...
	I1210 00:19:37.655836   91113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:19:37.655937   91113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:19:37.656023   91113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:19:37.656161   91113 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:19:37.656297   91113 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:19:37.656366   91113 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:19:37.784137   91113 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:19:37.784290   91113 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:19:38.285588   91113 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.696229ms
	I1210 00:19:38.285703   91113 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:19:42.786385   91113 kubeadm.go:310] [api-check] The API server is healthy after 4.501651752s
	I1210 00:19:42.797347   91113 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:19:42.816380   91113 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:19:42.846132   91113 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:19:42.846432   91113 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-677937 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:19:42.858150   91113 kubeadm.go:310] [bootstrap-token] Using token: j2vgil.cjuivjgp5bqbc3kw
	I1210 00:19:42.859622   91113 out.go:235]   - Configuring RBAC rules ...
	I1210 00:19:42.859770   91113 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:19:42.877407   91113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:19:42.887868   91113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:19:42.891241   91113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:19:42.894397   91113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:19:42.897250   91113 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:19:43.190777   91113 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:19:43.619240   91113 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:19:44.190627   91113 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:19:44.191489   91113 kubeadm.go:310] 
	I1210 00:19:44.191610   91113 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:19:44.191633   91113 kubeadm.go:310] 
	I1210 00:19:44.191768   91113 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:19:44.191779   91113 kubeadm.go:310] 
	I1210 00:19:44.191815   91113 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:19:44.191896   91113 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:19:44.191992   91113 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:19:44.192010   91113 kubeadm.go:310] 
	I1210 00:19:44.192103   91113 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:19:44.192124   91113 kubeadm.go:310] 
	I1210 00:19:44.192199   91113 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:19:44.192211   91113 kubeadm.go:310] 
	I1210 00:19:44.192287   91113 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:19:44.192400   91113 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:19:44.192513   91113 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:19:44.192531   91113 kubeadm.go:310] 
	I1210 00:19:44.192639   91113 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:19:44.192757   91113 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:19:44.192767   91113 kubeadm.go:310] 
	I1210 00:19:44.192893   91113 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j2vgil.cjuivjgp5bqbc3kw \
	I1210 00:19:44.193047   91113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1210 00:19:44.193077   91113 kubeadm.go:310] 	--control-plane 
	I1210 00:19:44.193084   91113 kubeadm.go:310] 
	I1210 00:19:44.193212   91113 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:19:44.193222   91113 kubeadm.go:310] 
	I1210 00:19:44.193352   91113 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j2vgil.cjuivjgp5bqbc3kw \
	I1210 00:19:44.193473   91113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1210 00:19:44.194038   91113 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
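The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA they discover via the bootstrap token. A standard-library Go sketch of that computation (the CA path is the one minikube copies to the node earlier in this log; the sketch itself is illustrative):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}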
	I1210 00:19:44.194114   91113 cni.go:84] Creating CNI manager for ""
	I1210 00:19:44.194128   91113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:19:44.196643   91113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:19:44.197907   91113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:19:44.210098   91113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
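The 496-byte file written above, /etc/cni/net.d/1-k8s.conflist, is a bridge CNI configuration. The Go sketch below prints an illustrative conflist of the standard bridge + host-local shape; it is a guess at the general structure only and is not claimed to match minikube's actual file byte for byte:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative bridge CNI conflist; the real 1-k8s.conflist may differ in detail.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.42.0.0/16",
				},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}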
	I1210 00:19:44.229477   91113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:19:44.229551   91113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:19:44.229558   91113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-677937 minikube.k8s.io/updated_at=2024_12_10T00_19_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=newest-cni-677937 minikube.k8s.io/primary=true
	I1210 00:19:44.410383   91113 ops.go:34] apiserver oom_adj: -16
	I1210 00:19:44.410638   91113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:19:44.910946   91113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:19:45.411668   91113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:19:45.911615   91113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:19:46.411540   91113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:19:46.911684   91113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:19:47.411714   91113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:19:47.910994   91113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:19:48.411402   91113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:19:48.495947   91113 kubeadm.go:1113] duration metric: took 4.266460758s to wait for elevateKubeSystemPrivileges
	I1210 00:19:48.495991   91113 kubeadm.go:394] duration metric: took 13.812990994s to StartCluster
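The burst of `kubectl get sa default` runs above is a simple poll: the command is retried roughly every 500ms until the default service account exists, which minikube treats as the signal that kube-system privileges are elevated and init is complete. A generic Go sketch of that retry pattern (interval, timeout, and the check function are illustrative stand-ins):

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil retries check every interval until it succeeds or timeout elapses.
func pollUntil(interval, timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := check(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	attempts := 0
	err := pollUntil(500*time.Millisecond, 10*time.Second, func() error {
		attempts++
		if attempts < 5 { // stand-in for "kubectl get sa default" not yet succeeding
			return errors.New("default service account not found yet")
		}
		return nil
	})
	fmt.Println("done after", attempts, "attempts, err =", err)
}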
	I1210 00:19:48.496014   91113 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:19:48.496106   91113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:19:48.498141   91113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:19:48.498370   91113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 00:19:48.498361   91113 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:19:48.498385   91113 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:19:48.498480   91113 addons.go:69] Setting default-storageclass=true in profile "newest-cni-677937"
	I1210 00:19:48.498468   91113 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-677937"
	I1210 00:19:48.498530   91113 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-677937"
	I1210 00:19:48.498586   91113 host.go:66] Checking if "newest-cni-677937" exists ...
	I1210 00:19:48.498499   91113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-677937"
	I1210 00:19:48.498628   91113 config.go:182] Loaded profile config "newest-cni-677937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:19:48.499068   91113 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:19:48.499074   91113 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:19:48.499107   91113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:19:48.499115   91113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:19:48.500060   91113 out.go:177] * Verifying Kubernetes components...
	I1210 00:19:48.501608   91113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:19:48.514823   91113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I1210 00:19:48.514831   91113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I1210 00:19:48.515293   91113 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:19:48.515339   91113 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:19:48.515780   91113 main.go:141] libmachine: Using API Version  1
	I1210 00:19:48.515805   91113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:19:48.515918   91113 main.go:141] libmachine: Using API Version  1
	I1210 00:19:48.515949   91113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:19:48.516177   91113 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:19:48.516261   91113 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:19:48.516379   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetState
	I1210 00:19:48.516739   91113 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:19:48.516779   91113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:19:48.519478   91113 addons.go:234] Setting addon default-storageclass=true in "newest-cni-677937"
	I1210 00:19:48.519523   91113 host.go:66] Checking if "newest-cni-677937" exists ...
	I1210 00:19:48.519925   91113 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:19:48.519971   91113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:19:48.534480   91113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I1210 00:19:48.534912   91113 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:19:48.535365   91113 main.go:141] libmachine: Using API Version  1
	I1210 00:19:48.535390   91113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:19:48.535784   91113 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:19:48.536451   91113 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:19:48.536502   91113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:19:48.537764   91113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38349
	I1210 00:19:48.538148   91113 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:19:48.538693   91113 main.go:141] libmachine: Using API Version  1
	I1210 00:19:48.538723   91113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:19:48.539113   91113 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:19:48.539335   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetState
	I1210 00:19:48.541208   91113 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:19:48.543322   91113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:19:48.544903   91113 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:19:48.544923   91113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:19:48.544941   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:19:48.548086   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:48.548632   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:48.548671   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:48.548791   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:19:48.548998   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:48.549162   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:19:48.549276   91113 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:19:48.554731   91113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35119
	I1210 00:19:48.555168   91113 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:19:48.555637   91113 main.go:141] libmachine: Using API Version  1
	I1210 00:19:48.555662   91113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:19:48.556031   91113 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:19:48.556228   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetState
	I1210 00:19:48.557981   91113 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:19:48.558219   91113 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:19:48.558242   91113 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:19:48.558263   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:19:48.561006   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:48.561460   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:16 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:19:48.561478   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:48.561687   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:19:48.561883   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:19:48.562024   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:19:48.562171   91113 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:19:48.698712   91113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 00:19:48.725870   91113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:19:48.885348   91113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:19:48.899535   91113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:19:49.244150   91113 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
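The CoreDNS edit above pipes the coredns ConfigMap through sed to insert a hosts { 192.168.39.1 host.minikube.internal; fallthrough } stanza ahead of the forward plugin (and a log directive ahead of errors), then replaces the ConfigMap; that is what the "host record injected" line confirms. A Go sketch of the same Corefile string transformation (the sample Corefile is a trimmed illustration, not the exact ConfigMap contents):

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf
}`

	hostsBlock := `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
`
	// Insert the hosts block ahead of the forward plugin, as the sed pipeline does.
	out := strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hostsBlock+"        forward . /etc/resolv.conf", 1)
	fmt.Println(out)
}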
	I1210 00:19:49.245291   91113 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:19:49.245364   91113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:19:49.750085   91113 main.go:141] libmachine: Making call to close driver server
	I1210 00:19:49.750118   91113 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:19:49.750144   91113 main.go:141] libmachine: Making call to close driver server
	I1210 00:19:49.750157   91113 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:19:49.750220   91113 api_server.go:72] duration metric: took 1.25176568s to wait for apiserver process to appear ...
	I1210 00:19:49.750251   91113 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:19:49.750275   91113 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I1210 00:19:49.750922   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Closing plugin on server side
	I1210 00:19:49.750924   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Closing plugin on server side
	I1210 00:19:49.750959   91113 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:19:49.750967   91113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:19:49.750976   91113 main.go:141] libmachine: Making call to close driver server
	I1210 00:19:49.750990   91113 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:19:49.751018   91113 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:19:49.751041   91113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:19:49.751052   91113 main.go:141] libmachine: Making call to close driver server
	I1210 00:19:49.751075   91113 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:19:49.751224   91113 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:19:49.751242   91113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:19:49.751268   91113 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:19:49.751281   91113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:19:49.755798   91113 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-677937" context rescaled to 1 replicas
	I1210 00:19:49.765250   91113 api_server.go:279] https://192.168.39.239:8443/healthz returned 200:
	ok
	I1210 00:19:49.766417   91113 api_server.go:141] control plane version: v1.31.2
	I1210 00:19:49.766440   91113 api_server.go:131] duration metric: took 16.181406ms to wait for apiserver health ...
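The healthz wait above polls https://192.168.39.239:8443/healthz until it returns 200 "ok". A hedged Go sketch of such a probe; note that unlike minikube, which authenticates with the cluster's client certificates, this sketch skips TLS verification purely for illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: a real check should trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.239:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body))
}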
	I1210 00:19:49.766458   91113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:19:49.774892   91113 main.go:141] libmachine: Making call to close driver server
	I1210 00:19:49.774915   91113 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:19:49.775197   91113 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:19:49.775217   91113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:19:49.777101   91113 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1210 00:19:49.778325   91113 addons.go:510] duration metric: took 1.279941822s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 00:19:49.791757   91113 system_pods.go:59] 8 kube-system pods found
	I1210 00:19:49.791798   91113 system_pods.go:61] "coredns-7c65d6cfc9-npft9" [eb96fb57-3d5f-43d6-8a1a-8e4535b50c0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:19:49.791808   91113 system_pods.go:61] "coredns-7c65d6cfc9-xjz2d" [0124d307-d837-4486-a840-0ebc723a746f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:19:49.791819   91113 system_pods.go:61] "etcd-newest-cni-677937" [a903b60a-73f3-4b18-ad5a-e0d3bbec547f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 00:19:49.791830   91113 system_pods.go:61] "kube-apiserver-newest-cni-677937" [005658fb-51cb-4b9f-ad38-a499ce8a6978] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 00:19:49.791842   91113 system_pods.go:61] "kube-controller-manager-newest-cni-677937" [638f1ba9-6afa-485b-bbc8-f57310572be1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 00:19:49.791852   91113 system_pods.go:61] "kube-proxy-hqfx2" [cd0206ac-0332-43f4-8506-7ce295b8baf9] Running
	I1210 00:19:49.791865   91113 system_pods.go:61] "kube-scheduler-newest-cni-677937" [67a52ae6-49b2-438d-9df7-f97ef0266216] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 00:19:49.791874   91113 system_pods.go:61] "storage-provisioner" [19d7c825-e7cb-4bd6-bea4-e838828bbdf8] Pending
	I1210 00:19:49.791885   91113 system_pods.go:74] duration metric: took 25.419524ms to wait for pod list to return data ...
	I1210 00:19:49.791897   91113 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:19:49.801584   91113 default_sa.go:45] found service account: "default"
	I1210 00:19:49.801609   91113 default_sa.go:55] duration metric: took 9.703504ms for default service account to be created ...
	I1210 00:19:49.801621   91113 kubeadm.go:582] duration metric: took 1.30317189s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 00:19:49.801635   91113 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:19:49.812071   91113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:19:49.812103   91113 node_conditions.go:123] node cpu capacity is 2
	I1210 00:19:49.812113   91113 node_conditions.go:105] duration metric: took 10.474249ms to run NodePressure ...
	I1210 00:19:49.812125   91113 start.go:241] waiting for startup goroutines ...
	I1210 00:19:49.812134   91113 start.go:246] waiting for cluster config update ...
	I1210 00:19:49.812146   91113 start.go:255] writing updated cluster config ...
	I1210 00:19:49.812415   91113 ssh_runner.go:195] Run: rm -f paused
	I1210 00:19:49.876418   91113 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:19:49.878572   91113 out.go:177] * Done! kubectl is now configured to use "newest-cni-677937" cluster and "default" namespace by default
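The start log above ends with minikube's readiness gates: it polls the apiserver's /healthz endpoint (here https://192.168.39.239:8443/healthz) until it answers 200, then waits for kube-system pods, the default service account, and the node pressure checks. The Go snippet below is a minimal sketch of that kind of healthz polling, assuming a self-signed cluster CA (hence the skipped certificate check) and an illustrative one-minute timeout; it is not minikube's own implementation.

// healthzwait is an illustrative sketch of the "waiting for apiserver healthz
// status" step logged above. URL, timeout, and TLS handling are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test cluster serves a self-signed certificate, so verification
		// is skipped purely for this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to the "returned 200: ok" line above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.239:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}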
	
	
	==> CRI-O <==
	Dec 10 00:19:53 embed-certs-825613 crio[689]: time="2024-12-10 00:19:53.971832931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789993971808000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8bc0e0d-24cb-4b28-83fd-ee59ad8810ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:53 embed-certs-825613 crio[689]: time="2024-12-10 00:19:53.972443469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88ea291c-1699-49e4-8474-19bf119f3729 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:53 embed-certs-825613 crio[689]: time="2024-12-10 00:19:53.972571022Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88ea291c-1699-49e4-8474-19bf119f3729 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:53 embed-certs-825613 crio[689]: time="2024-12-10 00:19:53.972897040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733788778622267808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d601f9e42631cd298cf241017ee607bd6dbf69fe78c62e3ebaa7145777b4e692,PodSandboxId:9058ab0a3a0d5c88566c9b0a1fcfd2211b1ae51f298823e2237bd15927c1cb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733788757643541081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26d6fcf4-98a5-4f18-a823-b7b7ec824711,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71,PodSandboxId:f37397cbb45679df27a16597bcdf2406bdf0ead341764319962fb99089a4e879,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788755456314249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvtlr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef5707d-5f06-46d0-809b-da79f79c49e5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733788747751358728,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994,PodSandboxId:9cbf654ac355f91a513e7db1e6577837c9b14c6d1c04b2ca3ba7a0c6ce45284a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733788747759104647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn6fg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6db02558-bfa6-4c5f-a120-aed13575b
273,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538,PodSandboxId:c2b66ef16a899ea5d355e9b3b1b65828d2e66f3b273db5a7309a51b051fee831,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733788744181097853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515071d8bf56c0
1b07af6d39852c2e11,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de,PodSandboxId:506e0e7ee92a7bd8873715c778d39d892fe1cb40b0ea85e427371641227754d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733788744177234994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcff97bb4f6a0cb96f1e10113a
260f34,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9,PodSandboxId:49f894377a222370019ddf4a558b814e79f0dec55facbff6954ead3e82919eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733788744156377984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823ae58953ea2563864dd528b6e2b2ba,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7,PodSandboxId:1ea97f8d6c2da0292213660bb42711c450ee9b0ab7057cb07c94bc586c84321e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733788744153273810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6860c965e2f880a34f73a811ad70073e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88ea291c-1699-49e4-8474-19bf119f3729 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.012160722Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f17cc0aa-d355-4d26-a2a8-3952f4f4461a name=/runtime.v1.RuntimeService/Version
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.012253676Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f17cc0aa-d355-4d26-a2a8-3952f4f4461a name=/runtime.v1.RuntimeService/Version
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.013395747Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=541ec531-1f6b-4514-afef-1904fd160eac name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.013826802Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789994013805279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=541ec531-1f6b-4514-afef-1904fd160eac name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.014300370Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29ef108f-b95d-4ac7-8d21-62b2cd47a078 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.014355440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29ef108f-b95d-4ac7-8d21-62b2cd47a078 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.014582027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733788778622267808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d601f9e42631cd298cf241017ee607bd6dbf69fe78c62e3ebaa7145777b4e692,PodSandboxId:9058ab0a3a0d5c88566c9b0a1fcfd2211b1ae51f298823e2237bd15927c1cb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733788757643541081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26d6fcf4-98a5-4f18-a823-b7b7ec824711,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71,PodSandboxId:f37397cbb45679df27a16597bcdf2406bdf0ead341764319962fb99089a4e879,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788755456314249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvtlr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef5707d-5f06-46d0-809b-da79f79c49e5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733788747751358728,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994,PodSandboxId:9cbf654ac355f91a513e7db1e6577837c9b14c6d1c04b2ca3ba7a0c6ce45284a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733788747759104647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn6fg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6db02558-bfa6-4c5f-a120-aed13575b
273,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538,PodSandboxId:c2b66ef16a899ea5d355e9b3b1b65828d2e66f3b273db5a7309a51b051fee831,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733788744181097853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515071d8bf56c0
1b07af6d39852c2e11,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de,PodSandboxId:506e0e7ee92a7bd8873715c778d39d892fe1cb40b0ea85e427371641227754d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733788744177234994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcff97bb4f6a0cb96f1e10113a
260f34,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9,PodSandboxId:49f894377a222370019ddf4a558b814e79f0dec55facbff6954ead3e82919eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733788744156377984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823ae58953ea2563864dd528b6e2b2ba,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7,PodSandboxId:1ea97f8d6c2da0292213660bb42711c450ee9b0ab7057cb07c94bc586c84321e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733788744153273810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6860c965e2f880a34f73a811ad70073e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29ef108f-b95d-4ac7-8d21-62b2cd47a078 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.050236368Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61e37238-088a-4715-8270-a02d7fd13693 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.050304819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61e37238-088a-4715-8270-a02d7fd13693 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.051328922Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a394108a-d4c1-4172-bdcd-3e51499c6118 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.051767903Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789994051744686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a394108a-d4c1-4172-bdcd-3e51499c6118 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.052302443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d8bd771-9e1c-49bc-a441-68599145ec17 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.052371003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d8bd771-9e1c-49bc-a441-68599145ec17 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.052602319Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733788778622267808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d601f9e42631cd298cf241017ee607bd6dbf69fe78c62e3ebaa7145777b4e692,PodSandboxId:9058ab0a3a0d5c88566c9b0a1fcfd2211b1ae51f298823e2237bd15927c1cb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733788757643541081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26d6fcf4-98a5-4f18-a823-b7b7ec824711,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71,PodSandboxId:f37397cbb45679df27a16597bcdf2406bdf0ead341764319962fb99089a4e879,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788755456314249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvtlr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef5707d-5f06-46d0-809b-da79f79c49e5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733788747751358728,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994,PodSandboxId:9cbf654ac355f91a513e7db1e6577837c9b14c6d1c04b2ca3ba7a0c6ce45284a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733788747759104647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn6fg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6db02558-bfa6-4c5f-a120-aed13575b
273,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538,PodSandboxId:c2b66ef16a899ea5d355e9b3b1b65828d2e66f3b273db5a7309a51b051fee831,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733788744181097853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515071d8bf56c0
1b07af6d39852c2e11,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de,PodSandboxId:506e0e7ee92a7bd8873715c778d39d892fe1cb40b0ea85e427371641227754d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733788744177234994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcff97bb4f6a0cb96f1e10113a
260f34,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9,PodSandboxId:49f894377a222370019ddf4a558b814e79f0dec55facbff6954ead3e82919eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733788744156377984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823ae58953ea2563864dd528b6e2b2ba,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7,PodSandboxId:1ea97f8d6c2da0292213660bb42711c450ee9b0ab7057cb07c94bc586c84321e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733788744153273810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6860c965e2f880a34f73a811ad70073e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d8bd771-9e1c-49bc-a441-68599145ec17 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.083759355Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2798d860-2761-4610-bb94-da6da90207c5 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.083843260Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2798d860-2761-4610-bb94-da6da90207c5 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.084857363Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1cb86368-e145-47bd-991b-4ba09b0bdc2b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.085275056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789994085224440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1cb86368-e145-47bd-991b-4ba09b0bdc2b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.085799957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82858a4b-a584-43c6-8bca-ecebc9877bd1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.085868048Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82858a4b-a584-43c6-8bca-ecebc9877bd1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:54 embed-certs-825613 crio[689]: time="2024-12-10 00:19:54.086066565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733788778622267808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d601f9e42631cd298cf241017ee607bd6dbf69fe78c62e3ebaa7145777b4e692,PodSandboxId:9058ab0a3a0d5c88566c9b0a1fcfd2211b1ae51f298823e2237bd15927c1cb2e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733788757643541081,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26d6fcf4-98a5-4f18-a823-b7b7ec824711,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71,PodSandboxId:f37397cbb45679df27a16597bcdf2406bdf0ead341764319962fb99089a4e879,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733788755456314249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvtlr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef5707d-5f06-46d0-809b-da79f79c49e5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1,PodSandboxId:2e0732572e8db1a23d92676b78887b86796aa4e9a205e92e43f12b5924d4563c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733788747751358728,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e5cabae9-bb71-4b5e-9a43-0dce4a0733a1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994,PodSandboxId:9cbf654ac355f91a513e7db1e6577837c9b14c6d1c04b2ca3ba7a0c6ce45284a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733788747759104647,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn6fg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6db02558-bfa6-4c5f-a120-aed13575b
273,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538,PodSandboxId:c2b66ef16a899ea5d355e9b3b1b65828d2e66f3b273db5a7309a51b051fee831,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733788744181097853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515071d8bf56c0
1b07af6d39852c2e11,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de,PodSandboxId:506e0e7ee92a7bd8873715c778d39d892fe1cb40b0ea85e427371641227754d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733788744177234994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcff97bb4f6a0cb96f1e10113a
260f34,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9,PodSandboxId:49f894377a222370019ddf4a558b814e79f0dec55facbff6954ead3e82919eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733788744156377984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823ae58953ea2563864dd528b6e2b2ba,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7,PodSandboxId:1ea97f8d6c2da0292213660bb42711c450ee9b0ab7057cb07c94bc586c84321e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733788744153273810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-825613,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6860c965e2f880a34f73a811ad70073e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82858a4b-a584-43c6-8bca-ecebc9877bd1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b794fd5af2249       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   2e0732572e8db       storage-provisioner
	d601f9e42631c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   9058ab0a3a0d5       busybox
	db9231487d25e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   f37397cbb4567       coredns-7c65d6cfc9-qvtlr
	a17d14690e81c       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      20 minutes ago      Running             kube-proxy                1                   9cbf654ac355f       kube-proxy-rn6fg
	e6a287aaa2bb1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   2e0732572e8db       storage-provisioner
	a8a1911851cba       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      20 minutes ago      Running             kube-controller-manager   1                   c2b66ef16a899       kube-controller-manager-embed-certs-825613
	f251f2ec97259       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      20 minutes ago      Running             kube-scheduler            1                   506e0e7ee92a7       kube-scheduler-embed-certs-825613
	c641220f93efe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   49f894377a222       etcd-embed-certs-825613
	07b6833b28b16       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      20 minutes ago      Running             kube-apiserver            1                   1ea97f8d6c2da       kube-apiserver-embed-certs-825613
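The table above follows the layout of crictl's container listing on the node (container ID, image, state, attempt, pod). The sketch below is one illustrative way to reproduce such a listing from Go by shelling out to crictl; the use of sudo and the exact flags are assumptions, not the command the report itself ran.

// listcontainers reproduces a container-status table like the one above by
// invoking crictl on the node. Illustrative only; paths and flags are assumed.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -a includes exited containers too (e.g. the Exited storage-provisioner above).
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}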
	
	
	==> coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42846 - 8639 "HINFO IN 2605694768704771407.3649347858089209996. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.038925999s
	
	
	==> describe nodes <==
	Name:               embed-certs-825613
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-825613
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=embed-certs-825613
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T23_50_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 23:50:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-825613
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:19:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:14:54 +0000   Mon, 09 Dec 2024 23:50:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:14:54 +0000   Mon, 09 Dec 2024 23:50:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:14:54 +0000   Mon, 09 Dec 2024 23:50:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:14:54 +0000   Mon, 09 Dec 2024 23:59:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.19
	  Hostname:    embed-certs-825613
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5553bfc98ff4251b26fadf70ee93ead
	  System UUID:                e5553bfc-98ff-4251-b26f-adf70ee93ead
	  Boot ID:                    3d98bdcb-9f0e-42c7-a111-540ec74aef73
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-qvtlr                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-825613                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-825613             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-825613    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-rn6fg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-825613             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-hg7c5               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-825613 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-825613 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-825613 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-825613 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-825613 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-825613 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node embed-certs-825613 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-825613 event: Registered Node embed-certs-825613 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-825613 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-825613 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-825613 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-825613 event: Registered Node embed-certs-825613 in Controller
	
	
	==> dmesg <==
	[Dec 9 23:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048892] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036533] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.833942] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.970754] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.547841] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.986658] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.064094] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063172] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.200745] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.100670] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.270843] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +4.039153] systemd-fstab-generator[772]: Ignoring "noauto" option for root device
	[Dec 9 23:59] systemd-fstab-generator[894]: Ignoring "noauto" option for root device
	[  +0.061094] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.504702] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.444205] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +3.278360] kauditd_printk_skb: 64 callbacks suppressed
	[ +25.240683] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] <==
	{"level":"info","ts":"2024-12-09T23:59:23.420756Z","caller":"traceutil/trace.go:171","msg":"trace[1263770303] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:624; }","duration":"221.12025ms","start":"2024-12-09T23:59:23.199625Z","end":"2024-12-09T23:59:23.420745Z","steps":["trace[1263770303] 'agreement among raft nodes before linearized reading'  (duration: 221.05274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:59:23.420944Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.631501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-825613\" ","response":"range_response_count:1 size:4533"}
	{"level":"info","ts":"2024-12-09T23:59:23.421206Z","caller":"traceutil/trace.go:171","msg":"trace[1455931536] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-embed-certs-825613; range_end:; response_count:1; response_revision:624; }","duration":"108.890958ms","start":"2024-12-09T23:59:23.312303Z","end":"2024-12-09T23:59:23.421194Z","steps":["trace[1455931536] 'agreement among raft nodes before linearized reading'  (duration: 108.609634ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:59:23.952762Z","caller":"traceutil/trace.go:171","msg":"trace[1671691036] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"520.173296ms","start":"2024-12-09T23:59:23.432573Z","end":"2024-12-09T23:59:23.952746Z","steps":["trace[1671691036] 'process raft request'  (duration: 519.718959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:59:23.952892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T23:59:23.432555Z","time spent":"520.28452ms","remote":"127.0.0.1:36292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4326,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-825613\" mod_revision:624 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-825613\" value_size:4258 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-825613\" > >"}
	{"level":"info","ts":"2024-12-09T23:59:23.952409Z","caller":"traceutil/trace.go:171","msg":"trace[69263747] linearizableReadLoop","detail":"{readStateIndex:670; appliedIndex:669; }","duration":"428.076212ms","start":"2024-12-09T23:59:23.524321Z","end":"2024-12-09T23:59:23.952397Z","steps":["trace[69263747] 'read index received'  (duration: 427.902534ms)","trace[69263747] 'applied index is now lower than readState.Index'  (duration: 173.205µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T23:59:23.953523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.452198ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-825613\" ","response":"range_response_count:1 size:4341"}
	{"level":"info","ts":"2024-12-09T23:59:23.953565Z","caller":"traceutil/trace.go:171","msg":"trace[457871556] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-embed-certs-825613; range_end:; response_count:1; response_revision:625; }","duration":"141.498717ms","start":"2024-12-09T23:59:23.812057Z","end":"2024-12-09T23:59:23.953556Z","steps":["trace[457871556] 'agreement among raft nodes before linearized reading'  (duration: 141.431414ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:59:43.832508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.861654ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12394631499418903644 > lease_revoke:<id:2c0293addcbb23d4>","response":"size:28"}
	{"level":"info","ts":"2024-12-10T00:00:00.481927Z","caller":"traceutil/trace.go:171","msg":"trace[1606349903] linearizableReadLoop","detail":"{readStateIndex:711; appliedIndex:710; }","duration":"283.177375ms","start":"2024-12-10T00:00:00.198735Z","end":"2024-12-10T00:00:00.481913Z","steps":["trace[1606349903] 'read index received'  (duration: 283.019224ms)","trace[1606349903] 'applied index is now lower than readState.Index'  (duration: 157.548µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-10T00:00:00.482244Z","caller":"traceutil/trace.go:171","msg":"trace[644595571] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"308.033134ms","start":"2024-12-10T00:00:00.174199Z","end":"2024-12-10T00:00:00.482232Z","steps":["trace[644595571] 'process raft request'  (duration: 307.595037ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-10T00:00:00.482372Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-10T00:00:00.174182Z","time spent":"308.120337ms","remote":"127.0.0.1:36282","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:654 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-12-10T00:00:00.482571Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"283.828056ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-10T00:00:00.482633Z","caller":"traceutil/trace.go:171","msg":"trace[1900332290] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:658; }","duration":"283.893994ms","start":"2024-12-10T00:00:00.198730Z","end":"2024-12-10T00:00:00.482624Z","steps":["trace[1900332290] 'agreement among raft nodes before linearized reading'  (duration: 283.814839ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-10T00:00:08.593933Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.832249ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-hg7c5\" ","response":"range_response_count:1 size:4384"}
	{"level":"info","ts":"2024-12-10T00:00:08.594260Z","caller":"traceutil/trace.go:171","msg":"trace[401372728] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-hg7c5; range_end:; response_count:1; response_revision:666; }","duration":"128.168658ms","start":"2024-12-10T00:00:08.466077Z","end":"2024-12-10T00:00:08.594246Z","steps":["trace[401372728] 'range keys from in-memory index tree'  (duration: 127.676583ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-10T00:09:05.820537Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":868}
	{"level":"info","ts":"2024-12-10T00:09:05.830778Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":868,"took":"9.890796ms","hash":2296338318,"current-db-size-bytes":2629632,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2629632,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-12-10T00:09:05.830937Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2296338318,"revision":868,"compact-revision":-1}
	{"level":"info","ts":"2024-12-10T00:14:05.829559Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1111}
	{"level":"info","ts":"2024-12-10T00:14:05.833227Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1111,"took":"3.149133ms","hash":2359914749,"current-db-size-bytes":2629632,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1589248,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-10T00:14:05.833324Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2359914749,"revision":1111,"compact-revision":868}
	{"level":"info","ts":"2024-12-10T00:19:05.836894Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1354}
	{"level":"info","ts":"2024-12-10T00:19:05.840693Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1354,"took":"3.111249ms","hash":1650892285,"current-db-size-bytes":2629632,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-10T00:19:05.840795Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1650892285,"revision":1354,"compact-revision":1111}
	
	
	==> kernel <==
	 00:19:54 up 21 min,  0 users,  load average: 0.07, 0.10, 0.10
	Linux embed-certs-825613 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] <==
	I1210 00:15:08.032169       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:15:08.032290       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 00:17:08.032696       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:17:08.032755       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1210 00:17:08.032703       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:17:08.032833       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 00:17:08.033994       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:17:08.034028       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 00:19:07.032707       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:19:07.032822       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 00:19:08.035185       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:19:08.035293       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 00:19:08.035308       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:19:08.035329       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 00:19:08.036664       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:19:08.036744       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] <==
	E1210 00:14:40.632412       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:14:41.210460       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:14:54.743957       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-825613"
	E1210 00:15:10.638287       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:15:11.218911       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:15:30.416437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="122.638µs"
	E1210 00:15:40.643454       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:15:41.226118       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:15:44.412274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="113.532µs"
	E1210 00:16:10.649145       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:16:11.232576       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:16:40.655449       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:16:41.240237       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:17:10.661336       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:17:11.247331       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:17:40.667963       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:17:41.254138       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:18:10.673207       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:18:11.264623       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:18:40.679291       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:18:41.271956       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:19:10.685587       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:19:11.279785       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:19:40.691719       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:19:41.287210       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 23:59:07.944840       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 23:59:07.963596       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.19"]
	E1209 23:59:07.963750       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:59:08.006910       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 23:59:08.006996       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 23:59:08.007049       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:59:08.009177       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:59:08.009411       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:59:08.009589       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:59:08.010770       1 config.go:199] "Starting service config controller"
	I1209 23:59:08.010820       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:59:08.010866       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:59:08.010882       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:59:08.011338       1 config.go:328] "Starting node config controller"
	I1209 23:59:08.011374       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:59:08.111145       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 23:59:08.111283       1 shared_informer.go:320] Caches are synced for service config
	I1209 23:59:08.111826       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] <==
	I1209 23:59:04.894213       1 serving.go:386] Generated self-signed cert in-memory
	W1209 23:59:06.947120       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 23:59:06.947157       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 23:59:06.947213       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 23:59:06.947222       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 23:59:07.053355       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1209 23:59:07.053429       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:59:07.082056       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1209 23:59:07.078190       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 23:59:07.082818       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 23:59:07.083198       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 23:59:07.183067       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 00:18:47 embed-certs-825613 kubelet[901]: E1210 00:18:47.398646     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hg7c5" podUID="2a657b1b-4435-42b5-aef2-deebf7865c83"
	Dec 10 00:18:52 embed-certs-825613 kubelet[901]: E1210 00:18:52.638361     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789932637302220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:52 embed-certs-825613 kubelet[901]: E1210 00:18:52.638399     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789932637302220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:59 embed-certs-825613 kubelet[901]: E1210 00:18:59.399235     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hg7c5" podUID="2a657b1b-4435-42b5-aef2-deebf7865c83"
	Dec 10 00:19:02 embed-certs-825613 kubelet[901]: E1210 00:19:02.413801     901 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 00:19:02 embed-certs-825613 kubelet[901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 00:19:02 embed-certs-825613 kubelet[901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 00:19:02 embed-certs-825613 kubelet[901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:19:02 embed-certs-825613 kubelet[901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:19:02 embed-certs-825613 kubelet[901]: E1210 00:19:02.640511     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789942640104936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:02 embed-certs-825613 kubelet[901]: E1210 00:19:02.640593     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789942640104936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:12 embed-certs-825613 kubelet[901]: E1210 00:19:12.642577     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789952641968927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:12 embed-certs-825613 kubelet[901]: E1210 00:19:12.642615     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789952641968927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:14 embed-certs-825613 kubelet[901]: E1210 00:19:14.398901     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hg7c5" podUID="2a657b1b-4435-42b5-aef2-deebf7865c83"
	Dec 10 00:19:22 embed-certs-825613 kubelet[901]: E1210 00:19:22.644228     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789962643909818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:22 embed-certs-825613 kubelet[901]: E1210 00:19:22.644677     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789962643909818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:27 embed-certs-825613 kubelet[901]: E1210 00:19:27.398458     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hg7c5" podUID="2a657b1b-4435-42b5-aef2-deebf7865c83"
	Dec 10 00:19:32 embed-certs-825613 kubelet[901]: E1210 00:19:32.648721     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789972648074839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:32 embed-certs-825613 kubelet[901]: E1210 00:19:32.648802     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789972648074839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:40 embed-certs-825613 kubelet[901]: E1210 00:19:40.398599     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hg7c5" podUID="2a657b1b-4435-42b5-aef2-deebf7865c83"
	Dec 10 00:19:42 embed-certs-825613 kubelet[901]: E1210 00:19:42.650839     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789982650355466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:42 embed-certs-825613 kubelet[901]: E1210 00:19:42.651226     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789982650355466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:52 embed-certs-825613 kubelet[901]: E1210 00:19:52.656096     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789992654696591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:52 embed-certs-825613 kubelet[901]: E1210 00:19:52.657259     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789992654696591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:54 embed-certs-825613 kubelet[901]: E1210 00:19:54.398389     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hg7c5" podUID="2a657b1b-4435-42b5-aef2-deebf7865c83"
	
	
	==> storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] <==
	I1209 23:59:38.733523       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:59:38.744625       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:59:38.744742       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:59:56.145961       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:59:56.146265       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-825613_c134b788-3d5c-4519-a5bc-6e8be741a5d7!
	I1209 23:59:56.148163       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bd7f2198-abd8-43b4-9ad3-a2585364fc90", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-825613_c134b788-3d5c-4519-a5bc-6e8be741a5d7 became leader
	I1209 23:59:56.246592       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-825613_c134b788-3d5c-4519-a5bc-6e8be741a5d7!
	
	
	==> storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] <==
	I1209 23:59:07.855371       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 23:59:37.858715       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-825613 -n embed-certs-825613
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-825613 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-hg7c5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-825613 describe pod metrics-server-6867b74b74-hg7c5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-825613 describe pod metrics-server-6867b74b74-hg7c5: exit status 1 (61.872552ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-hg7c5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-825613 describe pod metrics-server-6867b74b74-hg7c5: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (435.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (420.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-871210 -n default-k8s-diff-port-871210
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-10 00:20:39.105048946 +0000 UTC m=+6523.536664051
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-871210 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-871210 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.841µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-871210 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
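For reference, the image check that failed here can be rerun by hand against the same profile while the cluster is still reachable; the context, namespace, and deployment name are taken verbatim from the log above, and the jsonpath query is standard kubectl:

	kubectl --context default-k8s-diff-port-871210 get deploy dashboard-metrics-scraper -n kubernetes-dashboard -o jsonpath='{.spec.template.spec.containers[*].image}'

A passing run would print an image containing registry.k8s.io/echoserver:1.4 (the scraper image this test substitutes via `addons enable dashboard --images=MetricsScraper=...`, per the Audit table below); in this run the 9m pod wait consumed the test's deadline, so the follow-up describe inherited an already-expired context and returned no deployment info.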
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871210 -n default-k8s-diff-port-871210
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-871210 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-871210 logs -n 25: (1.289140692s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-825613            | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-871210  | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC | 09 Dec 24 23:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC |                     |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-720064        | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-048296                  | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-825613                 | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-871210       | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC | 10 Dec 24 00:04 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-720064             | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC | 10 Dec 24 00:19 UTC |
	| start   | -p newest-cni-677937 --memory=2200 --alsologtostderr   | newest-cni-677937            | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC | 10 Dec 24 00:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC | 10 Dec 24 00:19 UTC |
	| addons  | enable metrics-server -p newest-cni-677937             | newest-cni-677937            | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC | 10 Dec 24 00:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-677937                                   | newest-cni-677937            | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC | 10 Dec 24 00:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC | 10 Dec 24 00:19 UTC |
	| addons  | enable dashboard -p newest-cni-677937                  | newest-cni-677937            | jenkins | v1.34.0 | 10 Dec 24 00:20 UTC | 10 Dec 24 00:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-677937 --memory=2200 --alsologtostderr   | newest-cni-677937            | jenkins | v1.34.0 | 10 Dec 24 00:20 UTC | 10 Dec 24 00:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-677937 image list                           | newest-cni-677937            | jenkins | v1.34.0 | 10 Dec 24 00:20 UTC | 10 Dec 24 00:20 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-677937                                   | newest-cni-677937            | jenkins | v1.34.0 | 10 Dec 24 00:20 UTC | 10 Dec 24 00:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:20:02
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:20:02.543022   92058 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:20:02.543165   92058 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:20:02.543176   92058 out.go:358] Setting ErrFile to fd 2...
	I1210 00:20:02.543182   92058 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:20:02.543383   92058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1210 00:20:02.543947   92058 out.go:352] Setting JSON to false
	I1210 00:20:02.544880   92058 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10954,"bootTime":1733779049,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:20:02.544981   92058 start.go:139] virtualization: kvm guest
	I1210 00:20:02.547230   92058 out.go:177] * [newest-cni-677937] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:20:02.548624   92058 out.go:177]   - MINIKUBE_LOCATION=19888
	I1210 00:20:02.548655   92058 notify.go:220] Checking for updates...
	I1210 00:20:02.551053   92058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:20:02.552366   92058 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:20:02.553540   92058 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1210 00:20:02.554845   92058 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:20:02.556087   92058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:20:02.557784   92058 config.go:182] Loaded profile config "newest-cni-677937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:20:02.558397   92058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:20:02.558463   92058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:20:02.573444   92058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44285
	I1210 00:20:02.573942   92058 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:20:02.574497   92058 main.go:141] libmachine: Using API Version  1
	I1210 00:20:02.574520   92058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:20:02.574998   92058 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:20:02.575190   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:20:02.575480   92058 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:20:02.575829   92058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:20:02.575886   92058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:20:02.591393   92058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42991
	I1210 00:20:02.591852   92058 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:20:02.592395   92058 main.go:141] libmachine: Using API Version  1
	I1210 00:20:02.592424   92058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:20:02.592790   92058 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:20:02.592996   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:20:02.631455   92058 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 00:20:02.632787   92058 start.go:297] selected driver: kvm2
	I1210 00:20:02.632800   92058 start.go:901] validating driver "kvm2" against &{Name:newest-cni-677937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-677937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:20:02.632895   92058 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:20:02.633611   92058 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:20:02.633686   92058 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:20:02.649951   92058 install.go:137] /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:20:02.650387   92058 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 00:20:02.650422   92058 cni.go:84] Creating CNI manager for ""
	I1210 00:20:02.650469   92058 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:20:02.650520   92058 start.go:340] cluster config:
	{Name:newest-cni-677937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-677937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:20:02.650665   92058 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:20:02.653094   92058 out.go:177] * Starting "newest-cni-677937" primary control-plane node in "newest-cni-677937" cluster
	I1210 00:20:02.654273   92058 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:20:02.654305   92058 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:20:02.654311   92058 cache.go:56] Caching tarball of preloaded images
	I1210 00:20:02.654407   92058 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:20:02.654422   92058 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:20:02.654956   92058 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/config.json ...
	I1210 00:20:02.655631   92058 start.go:360] acquireMachinesLock for newest-cni-677937: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:20:02.655712   92058 start.go:364] duration metric: took 48.254µs to acquireMachinesLock for "newest-cni-677937"
	I1210 00:20:02.655734   92058 start.go:96] Skipping create...Using existing machine configuration
	I1210 00:20:02.655741   92058 fix.go:54] fixHost starting: 
	I1210 00:20:02.656405   92058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:20:02.656440   92058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:20:02.671343   92058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I1210 00:20:02.671819   92058 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:20:02.672315   92058 main.go:141] libmachine: Using API Version  1
	I1210 00:20:02.672342   92058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:20:02.672667   92058 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:20:02.672869   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:20:02.673027   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetState
	I1210 00:20:02.674545   92058 fix.go:112] recreateIfNeeded on newest-cni-677937: state=Stopped err=<nil>
	I1210 00:20:02.674572   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	W1210 00:20:02.674708   92058 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 00:20:02.677246   92058 out.go:177] * Restarting existing kvm2 VM for "newest-cni-677937" ...
	I1210 00:20:02.678393   92058 main.go:141] libmachine: (newest-cni-677937) Calling .Start
	I1210 00:20:02.678547   92058 main.go:141] libmachine: (newest-cni-677937) Ensuring networks are active...
	I1210 00:20:02.679248   92058 main.go:141] libmachine: (newest-cni-677937) Ensuring network default is active
	I1210 00:20:02.679593   92058 main.go:141] libmachine: (newest-cni-677937) Ensuring network mk-newest-cni-677937 is active
	I1210 00:20:02.680024   92058 main.go:141] libmachine: (newest-cni-677937) Getting domain xml...
	I1210 00:20:02.680807   92058 main.go:141] libmachine: (newest-cni-677937) Creating domain...
	I1210 00:20:03.925086   92058 main.go:141] libmachine: (newest-cni-677937) Waiting to get IP...
	I1210 00:20:03.926163   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:03.926626   92058 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:20:03.926706   92058 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:20:03.926598   92094 retry.go:31] will retry after 291.950787ms: waiting for machine to come up
	I1210 00:20:04.220247   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:04.220812   92058 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:20:04.220845   92058 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:20:04.220731   92094 retry.go:31] will retry after 274.36798ms: waiting for machine to come up
	I1210 00:20:04.497298   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:04.497759   92058 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:20:04.497791   92058 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:20:04.497729   92094 retry.go:31] will retry after 400.465553ms: waiting for machine to come up
	I1210 00:20:04.899233   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:04.899691   92058 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:20:04.899723   92058 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:20:04.899666   92094 retry.go:31] will retry after 528.249846ms: waiting for machine to come up
	I1210 00:20:05.429361   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:05.429813   92058 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:20:05.429838   92058 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:20:05.429780   92094 retry.go:31] will retry after 681.259096ms: waiting for machine to come up
	I1210 00:20:06.112786   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:06.113195   92058 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:20:06.113223   92058 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:20:06.113142   92094 retry.go:31] will retry after 844.076521ms: waiting for machine to come up
	I1210 00:20:06.959112   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:06.959613   92058 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:20:06.959643   92058 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:20:06.959594   92094 retry.go:31] will retry after 724.131777ms: waiting for machine to come up
	I1210 00:20:07.685261   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:07.685699   92058 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:20:07.685729   92058 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:20:07.685641   92094 retry.go:31] will retry after 990.57777ms: waiting for machine to come up
	I1210 00:20:08.677880   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:08.678336   92058 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:20:08.678376   92058 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:20:08.678285   92094 retry.go:31] will retry after 1.149566098s: waiting for machine to come up
	I1210 00:20:09.829485   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:09.829987   92058 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:20:09.830018   92058 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:20:09.829933   92094 retry.go:31] will retry after 2.194749432s: waiting for machine to come up
	I1210 00:20:12.026161   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:12.026654   92058 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:20:12.026683   92058 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:20:12.026616   92094 retry.go:31] will retry after 2.909211055s: waiting for machine to come up
	I1210 00:20:14.936959   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:14.937480   92058 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:20:14.937506   92058 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:20:14.937420   92094 retry.go:31] will retry after 2.294115615s: waiting for machine to come up
	I1210 00:20:17.234826   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:17.235189   92058 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:20:17.235214   92058 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:20:17.235163   92094 retry.go:31] will retry after 3.375392966s: waiting for machine to come up
	I1210 00:20:20.614204   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:20.614675   92058 main.go:141] libmachine: (newest-cni-677937) Found IP for machine: 192.168.39.239
	I1210 00:20:20.614691   92058 main.go:141] libmachine: (newest-cni-677937) Reserving static IP address...
	I1210 00:20:20.614708   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has current primary IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:20.615135   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "newest-cni-677937", mac: "52:54:00:d7:ab:8b", ip: "192.168.39.239"} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:20.615173   92058 main.go:141] libmachine: (newest-cni-677937) DBG | skip adding static IP to network mk-newest-cni-677937 - found existing host DHCP lease matching {name: "newest-cni-677937", mac: "52:54:00:d7:ab:8b", ip: "192.168.39.239"}
	I1210 00:20:20.615188   92058 main.go:141] libmachine: (newest-cni-677937) Reserved static IP address: 192.168.39.239
	I1210 00:20:20.615204   92058 main.go:141] libmachine: (newest-cni-677937) Waiting for SSH to be available...
	I1210 00:20:20.615217   92058 main.go:141] libmachine: (newest-cni-677937) DBG | Getting to WaitForSSH function...
	I1210 00:20:20.617540   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:20.617908   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:20.617936   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:20.618049   92058 main.go:141] libmachine: (newest-cni-677937) DBG | Using SSH client type: external
	I1210 00:20:20.618068   92058 main.go:141] libmachine: (newest-cni-677937) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa (-rw-------)
	I1210 00:20:20.618094   92058 main.go:141] libmachine: (newest-cni-677937) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:20:20.618102   92058 main.go:141] libmachine: (newest-cni-677937) DBG | About to run SSH command:
	I1210 00:20:20.618110   92058 main.go:141] libmachine: (newest-cni-677937) DBG | exit 0
	I1210 00:20:20.743751   92058 main.go:141] libmachine: (newest-cni-677937) DBG | SSH cmd err, output: <nil>: 
	I1210 00:20:20.744097   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetConfigRaw
	I1210 00:20:20.744739   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetIP
	I1210 00:20:20.747479   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:20.747814   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:20.747845   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:20.748078   92058 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/config.json ...
	I1210 00:20:20.748392   92058 machine.go:93] provisionDockerMachine start ...
	I1210 00:20:20.748416   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:20:20.748663   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:20.751023   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:20.751327   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:20.751363   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:20.751536   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:20:20.751735   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:20.751926   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:20.752064   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:20:20.752214   92058 main.go:141] libmachine: Using SSH client type: native
	I1210 00:20:20.752416   92058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I1210 00:20:20.752431   92058 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 00:20:20.871918   92058 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 00:20:20.871949   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetMachineName
	I1210 00:20:20.872206   92058 buildroot.go:166] provisioning hostname "newest-cni-677937"
	I1210 00:20:20.872228   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetMachineName
	I1210 00:20:20.872386   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:20.875066   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:20.875493   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:20.875521   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:20.875664   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:20:20.875884   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:20.876068   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:20.876223   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:20:20.876383   92058 main.go:141] libmachine: Using SSH client type: native
	I1210 00:20:20.876561   92058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I1210 00:20:20.876576   92058 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-677937 && echo "newest-cni-677937" | sudo tee /etc/hostname
	I1210 00:20:20.993431   92058 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-677937
	
	I1210 00:20:20.993466   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:20.996133   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:20.996412   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:20.996453   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:20.996612   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:20:20.996772   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:20.996911   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:20.997037   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:20:20.997161   92058 main.go:141] libmachine: Using SSH client type: native
	I1210 00:20:20.997329   92058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I1210 00:20:20.997345   92058 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-677937' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-677937/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-677937' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:20:21.112061   92058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:20:21.112087   92058 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1210 00:20:21.112105   92058 buildroot.go:174] setting up certificates
	I1210 00:20:21.112114   92058 provision.go:84] configureAuth start
	I1210 00:20:21.112122   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetMachineName
	I1210 00:20:21.112426   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetIP
	I1210 00:20:21.115110   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.115462   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:21.115490   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.115677   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:21.117815   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.118101   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:21.118133   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.118233   92058 provision.go:143] copyHostCerts
	I1210 00:20:21.118292   92058 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1210 00:20:21.118306   92058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1210 00:20:21.118371   92058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1210 00:20:21.118456   92058 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1210 00:20:21.118464   92058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1210 00:20:21.118490   92058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1210 00:20:21.118584   92058 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1210 00:20:21.118592   92058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1210 00:20:21.118614   92058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1210 00:20:21.118662   92058 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.newest-cni-677937 san=[127.0.0.1 192.168.39.239 localhost minikube newest-cni-677937]
	I1210 00:20:21.281014   92058 provision.go:177] copyRemoteCerts
	I1210 00:20:21.281075   92058 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:20:21.281106   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:21.283625   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.283902   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:21.283936   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.284076   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:20:21.284278   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:21.284452   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:20:21.284589   92058 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:20:21.370086   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 00:20:21.393457   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 00:20:21.417238   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 00:20:21.439011   92058 provision.go:87] duration metric: took 326.887295ms to configureAuth
	I1210 00:20:21.439038   92058 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:20:21.439204   92058 config.go:182] Loaded profile config "newest-cni-677937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:20:21.439288   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:21.441892   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.442334   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:21.442361   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.442593   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:20:21.442792   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:21.442946   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:21.443116   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:20:21.443287   92058 main.go:141] libmachine: Using SSH client type: native
	I1210 00:20:21.443446   92058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I1210 00:20:21.443461   92058 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:20:21.664542   92058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:20:21.664574   92058 machine.go:96] duration metric: took 916.16416ms to provisionDockerMachine
	I1210 00:20:21.664590   92058 start.go:293] postStartSetup for "newest-cni-677937" (driver="kvm2")
	I1210 00:20:21.664602   92058 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:20:21.664653   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:20:21.664995   92058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:20:21.665021   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:21.667682   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.668049   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:21.668087   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.668175   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:20:21.668379   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:21.668535   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:20:21.668692   92058 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:20:21.750249   92058 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:20:21.755084   92058 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:20:21.755119   92058 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1210 00:20:21.755201   92058 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1210 00:20:21.755305   92058 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1210 00:20:21.755428   92058 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:20:21.766067   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1210 00:20:21.789249   92058 start.go:296] duration metric: took 124.646305ms for postStartSetup
	I1210 00:20:21.789291   92058 fix.go:56] duration metric: took 19.133550206s for fixHost
	I1210 00:20:21.789310   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:21.791931   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.792219   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:21.792251   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.792360   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:20:21.792578   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:21.792704   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:21.792842   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:20:21.792993   92058 main.go:141] libmachine: Using SSH client type: native
	I1210 00:20:21.793149   92058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I1210 00:20:21.793160   92058 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:20:21.903947   92058 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733790021.875806665
	
	I1210 00:20:21.903975   92058 fix.go:216] guest clock: 1733790021.875806665
	I1210 00:20:21.903986   92058 fix.go:229] Guest: 2024-12-10 00:20:21.875806665 +0000 UTC Remote: 2024-12-10 00:20:21.789294725 +0000 UTC m=+19.285555127 (delta=86.51194ms)
	I1210 00:20:21.904009   92058 fix.go:200] guest clock delta is within tolerance: 86.51194ms
	I1210 00:20:21.904025   92058 start.go:83] releasing machines lock for "newest-cni-677937", held for 19.248300287s
	I1210 00:20:21.904046   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:20:21.904339   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetIP
	I1210 00:20:21.906883   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.907202   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:21.907236   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.907449   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:20:21.908003   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:20:21.908225   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:20:21.908326   92058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:20:21.908389   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:21.908449   92058 ssh_runner.go:195] Run: cat /version.json
	I1210 00:20:21.908476   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:21.911119   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.911184   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.911527   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:21.911557   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:21.911606   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.911625   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:21.911748   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:20:21.911875   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:20:21.911966   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:21.912068   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:21.912089   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:20:21.912266   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:20:21.912272   92058 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:20:21.912383   92058 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:20:22.008054   92058 ssh_runner.go:195] Run: systemctl --version
	I1210 00:20:22.013666   92058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:20:22.156465   92058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:20:22.162590   92058 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:20:22.162645   92058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:20:22.177793   92058 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:20:22.177818   92058 start.go:495] detecting cgroup driver to use...
	I1210 00:20:22.177871   92058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:20:22.193329   92058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:20:22.207661   92058 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:20:22.207718   92058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:20:22.221221   92058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:20:22.234436   92058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:20:22.342391   92058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:20:22.475709   92058 docker.go:233] disabling docker service ...
	I1210 00:20:22.475781   92058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:20:22.490720   92058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:20:22.503714   92058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:20:22.648031   92058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:20:22.768414   92058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:20:22.784310   92058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:20:22.802414   92058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:20:22.802488   92058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:20:22.812883   92058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:20:22.812968   92058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:20:22.823666   92058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:20:22.834157   92058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:20:22.844464   92058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:20:22.854707   92058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:20:22.865026   92058 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:20:22.881840   92058 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:20:22.892460   92058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:20:22.901751   92058 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:20:22.901812   92058 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:20:22.913760   92058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:20:22.922952   92058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:20:23.038392   92058 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:20:23.125936   92058 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:20:23.126026   92058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:20:23.130484   92058 start.go:563] Will wait 60s for crictl version
	I1210 00:20:23.130539   92058 ssh_runner.go:195] Run: which crictl
	I1210 00:20:23.134188   92058 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:20:23.170276   92058 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:20:23.170366   92058 ssh_runner.go:195] Run: crio --version
	I1210 00:20:23.197464   92058 ssh_runner.go:195] Run: crio --version
	I1210 00:20:23.225838   92058 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:20:23.227078   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetIP
	I1210 00:20:23.229938   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:23.230266   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:23.230294   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:23.230497   92058 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:20:23.234577   92058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:20:23.248469   92058 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 00:20:23.249763   92058 kubeadm.go:883] updating cluster {Name:newest-cni-677937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-677937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:20:23.249880   92058 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:20:23.249945   92058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:20:23.284228   92058 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 00:20:23.284303   92058 ssh_runner.go:195] Run: which lz4
	I1210 00:20:23.288293   92058 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 00:20:23.292632   92058 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 00:20:23.292708   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 00:20:24.524047   92058 crio.go:462] duration metric: took 1.235790971s to copy over tarball
	I1210 00:20:24.524133   92058 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 00:20:26.617073   92058 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.092901165s)
	I1210 00:20:26.617100   92058 crio.go:469] duration metric: took 2.093018349s to extract the tarball
	I1210 00:20:26.617107   92058 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 00:20:26.652994   92058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:20:26.694358   92058 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:20:26.694389   92058 cache_images.go:84] Images are preloaded, skipping loading
	I1210 00:20:26.694399   92058 kubeadm.go:934] updating node { 192.168.39.239 8443 v1.31.2 crio true true} ...
	I1210 00:20:26.694538   92058 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-677937 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-677937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
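The kubelet unit snippet printed above is rendered into a systemd drop-in; the scp lines a few entries below show it landing as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes). A sketch of writing such a drop-in and reloading systemd, with the flag values lifted from the unit above (this is an illustration, not minikube's own file-generation code):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Values taken from the unit printed in the log above.
		nodeName, nodeIP := "newest-cni-677937", "192.168.39.239"
		dropIn := fmt.Sprintf(`[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

	[Install]
	`, nodeName, nodeIP)

		const dir = "/etc/systemd/system/kubelet.service.d"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
			log.Fatal(err)
		}
		// Pick up the new drop-in, mirroring the daemon-reload in the log.
		if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
			log.Fatalf("daemon-reload: %v\n%s", err, out)
		}
	}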
	I1210 00:20:26.694630   92058 ssh_runner.go:195] Run: crio config
	I1210 00:20:26.734441   92058 cni.go:84] Creating CNI manager for ""
	I1210 00:20:26.734465   92058 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:20:26.734479   92058 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1210 00:20:26.734516   92058 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.239 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-677937 NodeName:newest-cni-677937 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:20:26.734684   92058 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-677937"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.239"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:20:26.734765   92058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:20:26.744573   92058 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:20:26.744636   92058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:20:26.753728   92058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1210 00:20:26.769440   92058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:20:26.785005   92058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
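At this point the rendered kubeadm config shown above has been copied to /var/tmp/minikube/kubeadm.yaml.new. One basic sanity check on such a config is that the pod subnet (10.42.0.0/16, from kubeadm.pod-network-cidr) does not overlap the service subnet (10.96.0.0/12). A small sketch of that check using only the standard library (the YAML itself is not parsed here; the CIDRs are copied from the config above):

	package main

	import (
		"fmt"
		"log"
		"net"
	)

	// overlaps reports whether two CIDR ranges share any addresses; because CIDRs
	// are aligned, overlap implies one network contains the other's base address.
	func overlaps(a, b *net.IPNet) bool {
		return a.Contains(b.IP) || b.Contains(a.IP)
	}

	func main() {
		_, podNet, err := net.ParseCIDR("10.42.0.0/16")
		if err != nil {
			log.Fatal(err)
		}
		_, svcNet, err := net.ParseCIDR("10.96.0.0/12")
		if err != nil {
			log.Fatal(err)
		}
		if overlaps(podNet, svcNet) {
			log.Fatalf("pod subnet %s overlaps service subnet %s", podNet, svcNet)
		}
		fmt.Printf("pod subnet %s and service subnet %s do not overlap\n", podNet, svcNet)
	}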
	I1210 00:20:26.801481   92058 ssh_runner.go:195] Run: grep 192.168.39.239	control-plane.minikube.internal$ /etc/hosts
	I1210 00:20:26.805144   92058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:20:26.817405   92058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:20:26.939539   92058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:20:26.955655   92058 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937 for IP: 192.168.39.239
	I1210 00:20:26.955678   92058 certs.go:194] generating shared ca certs ...
	I1210 00:20:26.955693   92058 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:20:26.955874   92058 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1210 00:20:26.955931   92058 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1210 00:20:26.955946   92058 certs.go:256] generating profile certs ...
	I1210 00:20:26.956064   92058 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/client.key
	I1210 00:20:26.956153   92058 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.key.ad66389d
	I1210 00:20:26.956216   92058 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/proxy-client.key
	I1210 00:20:26.956430   92058 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1210 00:20:26.956476   92058 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1210 00:20:26.956492   92058 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:20:26.956531   92058 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1210 00:20:26.956568   92058 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:20:26.956607   92058 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1210 00:20:26.956672   92058 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1210 00:20:26.957355   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:20:26.988321   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 00:20:27.014895   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:20:27.052212   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:20:27.098779   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:20:27.125883   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 00:20:27.150749   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:20:27.176638   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 00:20:27.199439   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1210 00:20:27.222076   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:20:27.244397   92058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1210 00:20:27.266250   92058 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:20:27.281935   92058 ssh_runner.go:195] Run: openssl version
	I1210 00:20:27.287368   92058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1210 00:20:27.297797   92058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1210 00:20:27.302229   92058 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1210 00:20:27.302275   92058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1210 00:20:27.307816   92058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:20:27.318009   92058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:20:27.328107   92058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:20:27.332543   92058 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:20:27.332591   92058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:20:27.337937   92058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:20:27.348137   92058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1210 00:20:27.358072   92058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1210 00:20:27.362383   92058 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1210 00:20:27.362438   92058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1210 00:20:27.367814   92058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1210 00:20:27.378998   92058 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:20:27.383435   92058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:20:27.389027   92058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:20:27.394409   92058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:20:27.400144   92058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:20:27.405716   92058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:20:27.411192   92058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
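The six `openssl x509 -checkend 86400` probes above confirm that each control-plane certificate remains valid for at least 24 hours before the existing cluster configuration is reused. An equivalent check in Go, as a sketch (the path is one of the certificates from the log; only the standard library is used):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// One of the certificates probed in the log above.
		const path = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
		raw, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			log.Fatalf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Same threshold as `openssl x509 -checkend 86400`: expiry within 24h fails.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			log.Fatalf("%s expires at %s (within 24h)", path, cert.NotAfter)
		}
		fmt.Printf("%s valid until %s\n", path, cert.NotAfter)
	}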
	I1210 00:20:27.416658   92058 kubeadm.go:392] StartCluster: {Name:newest-cni-677937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-677937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:20:27.416764   92058 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:20:27.416807   92058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:20:27.453931   92058 cri.go:89] found id: ""
	I1210 00:20:27.454005   92058 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:20:27.464512   92058 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 00:20:27.464547   92058 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 00:20:27.464601   92058 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 00:20:27.474343   92058 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:20:27.474870   92058 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-677937" does not appear in /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:20:27.475118   92058 kubeconfig.go:62] /home/jenkins/minikube-integration/19888-18950/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-677937" cluster setting kubeconfig missing "newest-cni-677937" context setting]
	I1210 00:20:27.475606   92058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:20:27.476772   92058 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 00:20:27.486219   92058 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.239
	I1210 00:20:27.486271   92058 kubeadm.go:1160] stopping kube-system containers ...
	I1210 00:20:27.486288   92058 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 00:20:27.486346   92058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:20:27.518610   92058 cri.go:89] found id: ""
	I1210 00:20:27.518685   92058 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 00:20:27.534636   92058 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:20:27.543915   92058 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:20:27.543940   92058 kubeadm.go:157] found existing configuration files:
	
	I1210 00:20:27.543997   92058 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:20:27.552840   92058 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:20:27.552910   92058 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:20:27.561847   92058 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:20:27.570276   92058 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:20:27.570342   92058 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:20:27.578902   92058 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:20:27.587679   92058 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:20:27.587726   92058 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:20:27.596531   92058 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:20:27.605236   92058 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:20:27.605319   92058 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:20:27.614699   92058 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:20:27.623927   92058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:20:27.722762   92058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:20:28.593167   92058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:20:28.815270   92058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:20:28.881211   92058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
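Note that the restart path above does not run a full `kubeadm init`; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the rendered config with a pinned binaries directory on PATH. A sketch of driving the same phase sequence from Go (the binary and config paths are those from the log; this simply shells out, it is not minikube's internal runner):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const (
			binDir = "/var/lib/minikube/binaries/v1.31.2"
			config = "/var/tmp/minikube/kubeadm.yaml"
		)
		// The same phase sequence that appears in the log above.
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", config},
			{"init", "phase", "kubeconfig", "all", "--config", config},
			{"init", "phase", "kubelet-start", "--config", config},
			{"init", "phase", "control-plane", "all", "--config", config},
			{"init", "phase", "etcd", "local", "--config", config},
		}
		for _, args := range phases {
			cmd := exec.Command(binDir+"/kubeadm", args...)
			// Prepend the pinned binaries directory, as the log's `env PATH=...` wrapper does.
			cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				log.Fatalf("kubeadm %v: %v", args, err)
			}
		}
	}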
	I1210 00:20:28.970434   92058 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:20:28.970541   92058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:20:29.471606   92058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:20:29.971445   92058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:20:30.470893   92058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:20:30.971362   92058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:20:31.004330   92058 api_server.go:72] duration metric: took 2.033895755s to wait for apiserver process to appear ...
	I1210 00:20:31.004358   92058 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:20:31.004376   92058 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I1210 00:20:31.004903   92058 api_server.go:269] stopped: https://192.168.39.239:8443/healthz: Get "https://192.168.39.239:8443/healthz": dial tcp 192.168.39.239:8443: connect: connection refused
	I1210 00:20:31.504741   92058 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I1210 00:20:33.683508   92058 api_server.go:279] https://192.168.39.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:20:33.683539   92058 api_server.go:103] status: https://192.168.39.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:20:33.683555   92058 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I1210 00:20:33.781325   92058 api_server.go:279] https://192.168.39.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:20:33.781425   92058 api_server.go:103] status: https://192.168.39.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:20:34.004500   92058 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I1210 00:20:34.010587   92058 api_server.go:279] https://192.168.39.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:20:34.010613   92058 api_server.go:103] status: https://192.168.39.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:20:34.504761   92058 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I1210 00:20:34.511552   92058 api_server.go:279] https://192.168.39.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:20:34.511596   92058 api_server.go:103] status: https://192.168.39.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:20:35.005156   92058 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I1210 00:20:35.009363   92058 api_server.go:279] https://192.168.39.239:8443/healthz returned 200:
	ok
	I1210 00:20:35.015281   92058 api_server.go:141] control plane version: v1.31.2
	I1210 00:20:35.015303   92058 api_server.go:131] duration metric: took 4.010939638s to wait for apiserver health ...
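The polling above hits /healthz anonymously, so the first responses are a 403 (anonymous user) and then 500s while post-start hooks such as rbac/bootstrap-roles settle; the loop only needs an eventual 200. A sketch of the same wait loop (endpoint taken from the log; TLS verification is skipped because the probe is anonymous and only the status code matters here):

	package main

	import (
		"crypto/tls"
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		const url = "https://192.168.39.239:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Anonymous probe: we only care about the status code, not server identity.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				// 403/500 are expected while RBAC bootstrap and post-start hooks complete.
				if code == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				log.Printf("healthz returned %d, retrying", code)
			} else {
				log.Printf("healthz not reachable yet: %v", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver never became healthy")
	}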
	I1210 00:20:35.015312   92058 cni.go:84] Creating CNI manager for ""
	I1210 00:20:35.015319   92058 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:20:35.017395   92058 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:20:35.018733   92058 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:20:35.032430   92058 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
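The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration for the 10.42.0.0/16 pod CIDR chosen earlier. A plausible shape for such a conflist, written from Go as a sketch (the exact contents minikube generates may differ; this mirrors a conventional bridge + host-local + portmap chain):

	package main

	import (
		"log"
		"os"
	)

	func main() {
		// A typical bridge CNI chain for the pod CIDR used above; the generated
		// file on the node may differ in detail.
		const conflist = `{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "ranges": [[{"subnet": "10.42.0.0/16"}]],
	        "routes": [{"dst": "0.0.0.0/0"}]
	      }
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	`
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}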
	I1210 00:20:35.050005   92058 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:20:35.058422   92058 system_pods.go:59] 8 kube-system pods found
	I1210 00:20:35.058459   92058 system_pods.go:61] "coredns-7c65d6cfc9-npft9" [eb96fb57-3d5f-43d6-8a1a-8e4535b50c0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:20:35.058474   92058 system_pods.go:61] "etcd-newest-cni-677937" [a903b60a-73f3-4b18-ad5a-e0d3bbec547f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 00:20:35.058485   92058 system_pods.go:61] "kube-apiserver-newest-cni-677937" [005658fb-51cb-4b9f-ad38-a499ce8a6978] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 00:20:35.058493   92058 system_pods.go:61] "kube-controller-manager-newest-cni-677937" [638f1ba9-6afa-485b-bbc8-f57310572be1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 00:20:35.058501   92058 system_pods.go:61] "kube-proxy-hqfx2" [cd0206ac-0332-43f4-8506-7ce295b8baf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 00:20:35.058513   92058 system_pods.go:61] "kube-scheduler-newest-cni-677937" [67a52ae6-49b2-438d-9df7-f97ef0266216] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 00:20:35.058523   92058 system_pods.go:61] "metrics-server-6867b74b74-nsrh8" [79fdd4a7-e0c6-4d37-becc-6e0aad898177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:20:35.058534   92058 system_pods.go:61] "storage-provisioner" [19d7c825-e7cb-4bd6-bea4-e838828bbdf8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 00:20:35.058542   92058 system_pods.go:74] duration metric: took 8.519061ms to wait for pod list to return data ...
	I1210 00:20:35.058554   92058 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:20:35.062031   92058 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:20:35.062050   92058 node_conditions.go:123] node cpu capacity is 2
	I1210 00:20:35.062060   92058 node_conditions.go:105] duration metric: took 3.501475ms to run NodePressure ...
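The readiness survey above is a plain pod list against kube-system followed by a look at node capacity and pressure conditions. The same pod query expressed with client-go, as a sketch (the kubeconfig path is the one the log repairs earlier; the imports are the usual k8s.io/client-go and apimachinery modules):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the log above.
		kubeconfig := "/home/jenkins/minikube-integration/19888-18950/kubeconfig"
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("  %s\t%s\n", p.Name, p.Status.Phase)
		}
	}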
	I1210 00:20:35.062075   92058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:20:35.341054   92058 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:20:35.352357   92058 ops.go:34] apiserver oom_adj: -16
	I1210 00:20:35.352383   92058 kubeadm.go:597] duration metric: took 7.887828639s to restartPrimaryControlPlane
	I1210 00:20:35.352396   92058 kubeadm.go:394] duration metric: took 7.935744867s to StartCluster
	I1210 00:20:35.352417   92058 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:20:35.352514   92058 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:20:35.353394   92058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:20:35.353641   92058 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:20:35.353754   92058 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:20:35.353854   92058 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-677937"
	I1210 00:20:35.353870   92058 config.go:182] Loaded profile config "newest-cni-677937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:20:35.353885   92058 addons.go:69] Setting default-storageclass=true in profile "newest-cni-677937"
	I1210 00:20:35.353900   92058 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-677937"
	I1210 00:20:35.353875   92058 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-677937"
	W1210 00:20:35.353932   92058 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:20:35.353963   92058 host.go:66] Checking if "newest-cni-677937" exists ...
	I1210 00:20:35.353880   92058 addons.go:69] Setting dashboard=true in profile "newest-cni-677937"
	I1210 00:20:35.354024   92058 addons.go:234] Setting addon dashboard=true in "newest-cni-677937"
	W1210 00:20:35.354036   92058 addons.go:243] addon dashboard should already be in state true
	I1210 00:20:35.354072   92058 host.go:66] Checking if "newest-cni-677937" exists ...
	I1210 00:20:35.353908   92058 addons.go:69] Setting metrics-server=true in profile "newest-cni-677937"
	I1210 00:20:35.354122   92058 addons.go:234] Setting addon metrics-server=true in "newest-cni-677937"
	W1210 00:20:35.354140   92058 addons.go:243] addon metrics-server should already be in state true
	I1210 00:20:35.354169   92058 host.go:66] Checking if "newest-cni-677937" exists ...
	I1210 00:20:35.354335   92058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:20:35.354371   92058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:20:35.354416   92058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:20:35.354448   92058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:20:35.354530   92058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:20:35.354545   92058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:20:35.354712   92058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:20:35.354773   92058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:20:35.355439   92058 out.go:177] * Verifying Kubernetes components...
	I1210 00:20:35.357199   92058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:20:35.370469   92058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38649
	I1210 00:20:35.370925   92058 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:20:35.370948   92058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
	I1210 00:20:35.371028   92058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44689
	I1210 00:20:35.371445   92058 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:20:35.371546   92058 main.go:141] libmachine: Using API Version  1
	I1210 00:20:35.371583   92058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:20:35.371620   92058 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:20:35.371929   92058 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:20:35.372014   92058 main.go:141] libmachine: Using API Version  1
	I1210 00:20:35.372038   92058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:20:35.372078   92058 main.go:141] libmachine: Using API Version  1
	I1210 00:20:35.372096   92058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:20:35.372388   92058 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:20:35.372459   92058 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:20:35.372500   92058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:20:35.372541   92058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:20:35.372970   92058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:20:35.372982   92058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:20:35.373009   92058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:20:35.373057   92058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:20:35.375013   92058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I1210 00:20:35.380963   92058 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:20:35.381559   92058 main.go:141] libmachine: Using API Version  1
	I1210 00:20:35.381619   92058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:20:35.382013   92058 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:20:35.382238   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetState
	I1210 00:20:35.385427   92058 addons.go:234] Setting addon default-storageclass=true in "newest-cni-677937"
	W1210 00:20:35.385480   92058 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:20:35.385520   92058 host.go:66] Checking if "newest-cni-677937" exists ...
	I1210 00:20:35.385896   92058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:20:35.385961   92058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:20:35.390241   92058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I1210 00:20:35.390630   92058 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:20:35.391223   92058 main.go:141] libmachine: Using API Version  1
	I1210 00:20:35.391247   92058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:20:35.391525   92058 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:20:35.391940   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetState
	I1210 00:20:35.392819   92058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35441
	I1210 00:20:35.393228   92058 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:20:35.393424   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:20:35.394047   92058 main.go:141] libmachine: Using API Version  1
	I1210 00:20:35.394074   92058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:20:35.394569   92058 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:20:35.394822   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetState
	I1210 00:20:35.395694   92058 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 00:20:35.396750   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:20:35.396995   92058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46573
	I1210 00:20:35.397494   92058 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:20:35.397970   92058 main.go:141] libmachine: Using API Version  1
	I1210 00:20:35.397992   92058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:20:35.398323   92058 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:20:35.398499   92058 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:20:35.398564   92058 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1210 00:20:35.399173   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetState
	I1210 00:20:35.399860   92058 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:20:35.399879   92058 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:20:35.399908   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:35.400025   92058 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 00:20:35.400038   92058 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 00:20:35.400049   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:35.400999   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:20:35.402767   92058 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:20:35.403634   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:35.403930   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:35.404258   92058 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:20:35.404277   92058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:20:35.404294   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:35.404772   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:35.404799   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:35.404882   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:35.404911   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:35.404930   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:20:35.405085   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:35.405117   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:20:35.405257   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:20:35.405299   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:35.405400   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:20:35.405510   92058 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:20:35.405555   92058 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:20:35.407154   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:35.407531   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:35.407617   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:35.407703   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:20:35.407878   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:35.407993   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:20:35.408119   92058 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:20:35.410122   92058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
	I1210 00:20:35.436178   92058 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:20:35.436697   92058 main.go:141] libmachine: Using API Version  1
	I1210 00:20:35.436728   92058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:20:35.437125   92058 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:20:35.437599   92058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:20:35.437639   92058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:20:35.453780   92058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41383
	I1210 00:20:35.454190   92058 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:20:35.454710   92058 main.go:141] libmachine: Using API Version  1
	I1210 00:20:35.454738   92058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:20:35.455108   92058 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:20:35.455349   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetState
	I1210 00:20:35.457029   92058 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:20:35.457349   92058 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:20:35.457372   92058 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:20:35.457392   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHHostname
	I1210 00:20:35.460596   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:35.460960   92058 main.go:141] libmachine: (newest-cni-677937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ab:8b", ip: ""} in network mk-newest-cni-677937: {Iface:virbr1 ExpiryTime:2024-12-10 01:20:13 +0000 UTC Type:0 Mac:52:54:00:d7:ab:8b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:newest-cni-677937 Clientid:01:52:54:00:d7:ab:8b}
	I1210 00:20:35.460993   92058 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined IP address 192.168.39.239 and MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:20:35.461175   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHPort
	I1210 00:20:35.461373   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHKeyPath
	I1210 00:20:35.461558   92058 main.go:141] libmachine: (newest-cni-677937) Calling .GetSSHUsername
	I1210 00:20:35.461714   92058 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa Username:docker}
	I1210 00:20:35.534704   92058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:20:35.551026   92058 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:20:35.551102   92058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:20:35.564410   92058 api_server.go:72] duration metric: took 210.733467ms to wait for apiserver process to appear ...
	I1210 00:20:35.564439   92058 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:20:35.564467   92058 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I1210 00:20:35.569946   92058 api_server.go:279] https://192.168.39.239:8443/healthz returned 200:
	ok
	I1210 00:20:35.571433   92058 api_server.go:141] control plane version: v1.31.2
	I1210 00:20:35.571453   92058 api_server.go:131] duration metric: took 7.006613ms to wait for apiserver health ...
	I1210 00:20:35.571464   92058 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:20:35.577766   92058 system_pods.go:59] 8 kube-system pods found
	I1210 00:20:35.577793   92058 system_pods.go:61] "coredns-7c65d6cfc9-npft9" [eb96fb57-3d5f-43d6-8a1a-8e4535b50c0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:20:35.577800   92058 system_pods.go:61] "etcd-newest-cni-677937" [a903b60a-73f3-4b18-ad5a-e0d3bbec547f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 00:20:35.577808   92058 system_pods.go:61] "kube-apiserver-newest-cni-677937" [005658fb-51cb-4b9f-ad38-a499ce8a6978] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 00:20:35.577815   92058 system_pods.go:61] "kube-controller-manager-newest-cni-677937" [638f1ba9-6afa-485b-bbc8-f57310572be1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 00:20:35.577821   92058 system_pods.go:61] "kube-proxy-hqfx2" [cd0206ac-0332-43f4-8506-7ce295b8baf9] Running
	I1210 00:20:35.577831   92058 system_pods.go:61] "kube-scheduler-newest-cni-677937" [67a52ae6-49b2-438d-9df7-f97ef0266216] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 00:20:35.577843   92058 system_pods.go:61] "metrics-server-6867b74b74-nsrh8" [79fdd4a7-e0c6-4d37-becc-6e0aad898177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:20:35.577849   92058 system_pods.go:61] "storage-provisioner" [19d7c825-e7cb-4bd6-bea4-e838828bbdf8] Running
	I1210 00:20:35.577861   92058 system_pods.go:74] duration metric: took 6.38886ms to wait for pod list to return data ...
	I1210 00:20:35.577872   92058 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:20:35.580498   92058 default_sa.go:45] found service account: "default"
	I1210 00:20:35.580526   92058 default_sa.go:55] duration metric: took 2.647632ms for default service account to be created ...
	I1210 00:20:35.580540   92058 kubeadm.go:582] duration metric: took 226.867419ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 00:20:35.580555   92058 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:20:35.582813   92058 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:20:35.582832   92058 node_conditions.go:123] node cpu capacity is 2
	I1210 00:20:35.582844   92058 node_conditions.go:105] duration metric: took 2.283618ms to run NodePressure ...
	I1210 00:20:35.582858   92058 start.go:241] waiting for startup goroutines ...
	I1210 00:20:35.619238   92058 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 00:20:35.619261   92058 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 00:20:35.628023   92058 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:20:35.628046   92058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:20:35.644127   92058 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 00:20:35.644154   92058 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 00:20:35.649306   92058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:20:35.667032   92058 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:20:35.667062   92058 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:20:35.683024   92058 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 00:20:35.683052   92058 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 00:20:35.728364   92058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:20:35.732562   92058 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:20:35.732596   92058 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:20:35.758446   92058 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 00:20:35.758468   92058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 00:20:35.814502   92058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:20:35.849914   92058 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 00:20:35.849947   92058 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 00:20:35.912097   92058 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 00:20:35.912123   92058 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 00:20:36.030260   92058 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 00:20:36.030295   92058 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 00:20:36.127052   92058 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 00:20:36.127078   92058 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 00:20:36.204478   92058 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 00:20:36.204507   92058 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 00:20:36.283312   92058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 00:20:37.480449   92058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.752043699s)
	I1210 00:20:37.480528   92058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.665987724s)
	I1210 00:20:37.480554   92058 main.go:141] libmachine: Making call to close driver server
	I1210 00:20:37.480566   92058 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:20:37.480598   92058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.831252454s)
	I1210 00:20:37.480554   92058 main.go:141] libmachine: Making call to close driver server
	I1210 00:20:37.480639   92058 main.go:141] libmachine: Making call to close driver server
	I1210 00:20:37.480679   92058 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:20:37.480752   92058 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:20:37.480883   92058 main.go:141] libmachine: (newest-cni-677937) DBG | Closing plugin on server side
	I1210 00:20:37.480984   92058 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:20:37.480998   92058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:20:37.481008   92058 main.go:141] libmachine: Making call to close driver server
	I1210 00:20:37.481016   92058 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:20:37.481016   92058 main.go:141] libmachine: (newest-cni-677937) DBG | Closing plugin on server side
	I1210 00:20:37.481041   92058 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:20:37.481047   92058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:20:37.481054   92058 main.go:141] libmachine: Making call to close driver server
	I1210 00:20:37.481061   92058 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:20:37.481160   92058 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:20:37.481183   92058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:20:37.481215   92058 main.go:141] libmachine: Making call to close driver server
	I1210 00:20:37.481244   92058 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:20:37.481434   92058 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:20:37.481452   92058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:20:37.481462   92058 addons.go:475] Verifying addon metrics-server=true in "newest-cni-677937"
	I1210 00:20:37.481324   92058 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:20:37.481486   92058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:20:37.481351   92058 main.go:141] libmachine: (newest-cni-677937) DBG | Closing plugin on server side
	I1210 00:20:37.481640   92058 main.go:141] libmachine: (newest-cni-677937) DBG | Closing plugin on server side
	I1210 00:20:37.481657   92058 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:20:37.481664   92058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:20:37.489634   92058 main.go:141] libmachine: Making call to close driver server
	I1210 00:20:37.489652   92058 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:20:37.489929   92058 main.go:141] libmachine: (newest-cni-677937) DBG | Closing plugin on server side
	I1210 00:20:37.489968   92058 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:20:37.489979   92058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:20:38.047144   92058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.76378034s)
	I1210 00:20:38.047207   92058 main.go:141] libmachine: Making call to close driver server
	I1210 00:20:38.047220   92058 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:20:38.047540   92058 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:20:38.047557   92058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:20:38.047577   92058 main.go:141] libmachine: Making call to close driver server
	I1210 00:20:38.047585   92058 main.go:141] libmachine: (newest-cni-677937) Calling .Close
	I1210 00:20:38.047845   92058 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:20:38.047865   92058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:20:38.049349   92058 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-677937 addons enable metrics-server
	
	I1210 00:20:38.050730   92058 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I1210 00:20:38.051854   92058 addons.go:510] duration metric: took 2.698101671s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I1210 00:20:38.051896   92058 start.go:246] waiting for cluster config update ...
	I1210 00:20:38.051912   92058 start.go:255] writing updated cluster config ...
	I1210 00:20:38.052211   92058 ssh_runner.go:195] Run: rm -f paused
	I1210 00:20:38.102472   92058 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:20:38.104322   92058 out.go:177] * Done! kubectl is now configured to use "newest-cni-677937" cluster and "default" namespace by default
	
	
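Note: the minikube log above ends with the apiserver readiness wait (healthz at https://192.168.39.239:8443/healthz returning 200 "ok", completed in ~7ms) before the addon manifests are applied. The sketch below is a minimal Go illustration of that kind of readiness poll; it is not minikube's actual api_server.go code, and the URL, timeout, interval, and TLS handling are illustrative assumptions only.

	// Minimal sketch of an apiserver /healthz readiness poll (assumption: a
	// plain HTTPS endpoint that answers "ok" with HTTP 200 when healthy, as
	// seen in the log above). Not minikube's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout, interval time.Duration) error {
		// Skip certificate verification: a test apiserver usually serves a
		// self-signed certificate (assumption for this sketch).
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// Healthy apiserver: HTTP 200 with body "ok"
				// (the log shows "returned 200: ok").
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("healthz at %s not ready after %s", url, timeout)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.239:8443/healthz", 2*time.Minute, 2*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz is ok")
	}

In the run above the wait returned almost immediately because the apiserver was already up; the same poll-until-healthy pattern, with a longer timeout, is what gates the metrics-server, storage-provisioner, and dashboard applies that follow in the log.
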
	==> CRI-O <==
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.755523369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733790039755498615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e20b2e14-57e9-4c5d-aad1-c148cffd3ee6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.756234362Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b89ebf57-3b16-4a7d-aa7a-42d01dcd95b6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.756317082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b89ebf57-3b16-4a7d-aa7a-42d01dcd95b6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.756515703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f2560bf3f170fe1fc85a677b8ae7d26b493f2d145d16e3c41ffd744e1717773,PodSandboxId:2d35ff9b0b4e7e5a52a64316c62f56748c42ea5ec7190c2e1138f5849f9fc685,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789065436280195,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7xpcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cff7e56-1785-41c8-bd9c-db9d3f0bd05f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4985f7331836a5ae10fb2fb99513b4cc87b70875d16351a7466c968396c6968c,PodSandboxId:23d078bdf6810b0c8c165535a6e1cdb5a2cbdba7ddbb34aead976639142e494d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789065278412892,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj85d,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d1b9b056-f4a3-419c-86fa-a94d88464f74,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6399bf0bce56bda89140073b8d361b4afe760a3a5d0c89f5dd4af1cab20ab37a,PodSandboxId:19d9c622baf2311bd5365878c281a9e7c300a3af42bf12aace93141df867f1cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789065177051539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z2n25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b81952-728
1-4705-9536-06eb939a5807,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f384fa1da72a45dcdbc35cfd2f4fa3ab98712f0cc2d80629ce52fbb0c443f0cc,PodSandboxId:69aed00250100553182ed7483ccc5c286d23454a6ece9eeeb11eabd393e4d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1
733789064871716208,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea716edd-4030-4ec3-b094-c3a50154b473,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcfbdf5799fe9759c9be24bb1a4680940136ba17437b3687fad21f9dd1978f6,PodSandboxId:52b456845966d34aa556f24e7298067eed0081c5216a6a40c37c26e6b4c851a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173378905
2529970142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985a02d3bfa184443d7fb95235dee937,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e017b3720454255e4a31ba485c6f73a540a911ce08804b8bd412b6fa0ae669a,PodSandboxId:71b1ea323f87218f11fffd4c2ab04fd63d61946c4f7035501747a6e55252b314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733789052545410929,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e24f0391c006e0575694a0e26b27d9e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de46d5ff86dd5461c31f73b51f91a26504e1bc4d47ae016ad46487c82e4a8a62,PodSandboxId:0b23ebb38c2037752681775a18b8dde5f4bea1da7d2f4c7c28e3bda1282d9306,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Creat
edAt:1733789052542786778,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170d4ffdb0ab37daa7cc398387a6b976,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d1e8debce6d6d89cfaa6bd5a97b1694d5bc200ec750caa964110e95ef3820e,PodSandboxId:d595c29971d8383af6de543deb71112c2d2c01c8ff562e900f221de0be5f9331,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789
052497110116,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 779d4b3f1d29fa0566dfa9ae56e9ccf9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a6da443a267e57163cf919f914e2fce3e9ee54cf7ae17e914e1febf275171b,PodSandboxId:907ec5e7689c40ad35f5260d8ca5846b1f8315104ff491a5a7423506fab033e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733788763997142612,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e24f0391c006e0575694a0e26b27d9e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b89ebf57-3b16-4a7d-aa7a-42d01dcd95b6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.792276958Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00dcf06b-dfa2-4dcb-9fc0-6fb5f68cc25c name=/runtime.v1.RuntimeService/Version
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.792361104Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00dcf06b-dfa2-4dcb-9fc0-6fb5f68cc25c name=/runtime.v1.RuntimeService/Version
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.793492764Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c48f055-7c31-4dc5-aea4-7e7d10b620b4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.793918198Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733790039793897021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c48f055-7c31-4dc5-aea4-7e7d10b620b4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.794456200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c39260c9-fdfc-4a58-b7d0-593ab2c19964 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.794524700Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c39260c9-fdfc-4a58-b7d0-593ab2c19964 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.794742378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f2560bf3f170fe1fc85a677b8ae7d26b493f2d145d16e3c41ffd744e1717773,PodSandboxId:2d35ff9b0b4e7e5a52a64316c62f56748c42ea5ec7190c2e1138f5849f9fc685,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789065436280195,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7xpcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cff7e56-1785-41c8-bd9c-db9d3f0bd05f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4985f7331836a5ae10fb2fb99513b4cc87b70875d16351a7466c968396c6968c,PodSandboxId:23d078bdf6810b0c8c165535a6e1cdb5a2cbdba7ddbb34aead976639142e494d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789065278412892,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj85d,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d1b9b056-f4a3-419c-86fa-a94d88464f74,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6399bf0bce56bda89140073b8d361b4afe760a3a5d0c89f5dd4af1cab20ab37a,PodSandboxId:19d9c622baf2311bd5365878c281a9e7c300a3af42bf12aace93141df867f1cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789065177051539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z2n25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b81952-728
1-4705-9536-06eb939a5807,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f384fa1da72a45dcdbc35cfd2f4fa3ab98712f0cc2d80629ce52fbb0c443f0cc,PodSandboxId:69aed00250100553182ed7483ccc5c286d23454a6ece9eeeb11eabd393e4d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1
733789064871716208,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea716edd-4030-4ec3-b094-c3a50154b473,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcfbdf5799fe9759c9be24bb1a4680940136ba17437b3687fad21f9dd1978f6,PodSandboxId:52b456845966d34aa556f24e7298067eed0081c5216a6a40c37c26e6b4c851a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173378905
2529970142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985a02d3bfa184443d7fb95235dee937,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e017b3720454255e4a31ba485c6f73a540a911ce08804b8bd412b6fa0ae669a,PodSandboxId:71b1ea323f87218f11fffd4c2ab04fd63d61946c4f7035501747a6e55252b314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733789052545410929,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e24f0391c006e0575694a0e26b27d9e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de46d5ff86dd5461c31f73b51f91a26504e1bc4d47ae016ad46487c82e4a8a62,PodSandboxId:0b23ebb38c2037752681775a18b8dde5f4bea1da7d2f4c7c28e3bda1282d9306,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Creat
edAt:1733789052542786778,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170d4ffdb0ab37daa7cc398387a6b976,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d1e8debce6d6d89cfaa6bd5a97b1694d5bc200ec750caa964110e95ef3820e,PodSandboxId:d595c29971d8383af6de543deb71112c2d2c01c8ff562e900f221de0be5f9331,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789
052497110116,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 779d4b3f1d29fa0566dfa9ae56e9ccf9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a6da443a267e57163cf919f914e2fce3e9ee54cf7ae17e914e1febf275171b,PodSandboxId:907ec5e7689c40ad35f5260d8ca5846b1f8315104ff491a5a7423506fab033e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733788763997142612,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e24f0391c006e0575694a0e26b27d9e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c39260c9-fdfc-4a58-b7d0-593ab2c19964 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.842914458Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=661a50ac-69b8-492c-833d-56f13e455111 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.843038605Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=661a50ac-69b8-492c-833d-56f13e455111 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.844514291Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae31344d-d0fe-470c-82bf-53d1a46214aa name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.845130318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733790039845103371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae31344d-d0fe-470c-82bf-53d1a46214aa name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.845912625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23a772b5-4165-408e-b0a8-5c5ff05a373b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.846018182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23a772b5-4165-408e-b0a8-5c5ff05a373b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.846342991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f2560bf3f170fe1fc85a677b8ae7d26b493f2d145d16e3c41ffd744e1717773,PodSandboxId:2d35ff9b0b4e7e5a52a64316c62f56748c42ea5ec7190c2e1138f5849f9fc685,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789065436280195,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7xpcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cff7e56-1785-41c8-bd9c-db9d3f0bd05f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4985f7331836a5ae10fb2fb99513b4cc87b70875d16351a7466c968396c6968c,PodSandboxId:23d078bdf6810b0c8c165535a6e1cdb5a2cbdba7ddbb34aead976639142e494d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789065278412892,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj85d,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d1b9b056-f4a3-419c-86fa-a94d88464f74,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6399bf0bce56bda89140073b8d361b4afe760a3a5d0c89f5dd4af1cab20ab37a,PodSandboxId:19d9c622baf2311bd5365878c281a9e7c300a3af42bf12aace93141df867f1cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789065177051539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z2n25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b81952-728
1-4705-9536-06eb939a5807,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f384fa1da72a45dcdbc35cfd2f4fa3ab98712f0cc2d80629ce52fbb0c443f0cc,PodSandboxId:69aed00250100553182ed7483ccc5c286d23454a6ece9eeeb11eabd393e4d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1
733789064871716208,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea716edd-4030-4ec3-b094-c3a50154b473,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcfbdf5799fe9759c9be24bb1a4680940136ba17437b3687fad21f9dd1978f6,PodSandboxId:52b456845966d34aa556f24e7298067eed0081c5216a6a40c37c26e6b4c851a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173378905
2529970142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985a02d3bfa184443d7fb95235dee937,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e017b3720454255e4a31ba485c6f73a540a911ce08804b8bd412b6fa0ae669a,PodSandboxId:71b1ea323f87218f11fffd4c2ab04fd63d61946c4f7035501747a6e55252b314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733789052545410929,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e24f0391c006e0575694a0e26b27d9e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de46d5ff86dd5461c31f73b51f91a26504e1bc4d47ae016ad46487c82e4a8a62,PodSandboxId:0b23ebb38c2037752681775a18b8dde5f4bea1da7d2f4c7c28e3bda1282d9306,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Creat
edAt:1733789052542786778,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170d4ffdb0ab37daa7cc398387a6b976,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d1e8debce6d6d89cfaa6bd5a97b1694d5bc200ec750caa964110e95ef3820e,PodSandboxId:d595c29971d8383af6de543deb71112c2d2c01c8ff562e900f221de0be5f9331,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789
052497110116,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 779d4b3f1d29fa0566dfa9ae56e9ccf9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a6da443a267e57163cf919f914e2fce3e9ee54cf7ae17e914e1febf275171b,PodSandboxId:907ec5e7689c40ad35f5260d8ca5846b1f8315104ff491a5a7423506fab033e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733788763997142612,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e24f0391c006e0575694a0e26b27d9e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23a772b5-4165-408e-b0a8-5c5ff05a373b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.883678125Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92e608f7-bc33-476e-9f0a-ddaba4ec20a8 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.883755595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92e608f7-bc33-476e-9f0a-ddaba4ec20a8 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.884722868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d45cf09-9cf7-4a7e-b298-33b75668ab95 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.885123958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733790039885101474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d45cf09-9cf7-4a7e-b298-33b75668ab95 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.885766792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f597904-2756-4b26-8879-79bd67280e1f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.885841943Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f597904-2756-4b26-8879-79bd67280e1f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:20:39 default-k8s-diff-port-871210 crio[723]: time="2024-12-10 00:20:39.886112437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f2560bf3f170fe1fc85a677b8ae7d26b493f2d145d16e3c41ffd744e1717773,PodSandboxId:2d35ff9b0b4e7e5a52a64316c62f56748c42ea5ec7190c2e1138f5849f9fc685,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789065436280195,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7xpcc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cff7e56-1785-41c8-bd9c-db9d3f0bd05f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4985f7331836a5ae10fb2fb99513b4cc87b70875d16351a7466c968396c6968c,PodSandboxId:23d078bdf6810b0c8c165535a6e1cdb5a2cbdba7ddbb34aead976639142e494d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789065278412892,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pj85d,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d1b9b056-f4a3-419c-86fa-a94d88464f74,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6399bf0bce56bda89140073b8d361b4afe760a3a5d0c89f5dd4af1cab20ab37a,PodSandboxId:19d9c622baf2311bd5365878c281a9e7c300a3af42bf12aace93141df867f1cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789065177051539,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z2n25,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b81952-728
1-4705-9536-06eb939a5807,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f384fa1da72a45dcdbc35cfd2f4fa3ab98712f0cc2d80629ce52fbb0c443f0cc,PodSandboxId:69aed00250100553182ed7483ccc5c286d23454a6ece9eeeb11eabd393e4d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1
733789064871716208,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea716edd-4030-4ec3-b094-c3a50154b473,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffcfbdf5799fe9759c9be24bb1a4680940136ba17437b3687fad21f9dd1978f6,PodSandboxId:52b456845966d34aa556f24e7298067eed0081c5216a6a40c37c26e6b4c851a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173378905
2529970142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985a02d3bfa184443d7fb95235dee937,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e017b3720454255e4a31ba485c6f73a540a911ce08804b8bd412b6fa0ae669a,PodSandboxId:71b1ea323f87218f11fffd4c2ab04fd63d61946c4f7035501747a6e55252b314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733789052545410929,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e24f0391c006e0575694a0e26b27d9e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de46d5ff86dd5461c31f73b51f91a26504e1bc4d47ae016ad46487c82e4a8a62,PodSandboxId:0b23ebb38c2037752681775a18b8dde5f4bea1da7d2f4c7c28e3bda1282d9306,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Creat
edAt:1733789052542786778,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170d4ffdb0ab37daa7cc398387a6b976,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d1e8debce6d6d89cfaa6bd5a97b1694d5bc200ec750caa964110e95ef3820e,PodSandboxId:d595c29971d8383af6de543deb71112c2d2c01c8ff562e900f221de0be5f9331,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789
052497110116,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 779d4b3f1d29fa0566dfa9ae56e9ccf9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a6da443a267e57163cf919f914e2fce3e9ee54cf7ae17e914e1febf275171b,PodSandboxId:907ec5e7689c40ad35f5260d8ca5846b1f8315104ff491a5a7423506fab033e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733788763997142612,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-871210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e24f0391c006e0575694a0e26b27d9e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f597904-2756-4b26-8879-79bd67280e1f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2f2560bf3f170       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   2d35ff9b0b4e7       coredns-7c65d6cfc9-7xpcc
	4985f7331836a       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   16 minutes ago      Running             kube-proxy                0                   23d078bdf6810       kube-proxy-pj85d
	6399bf0bce56b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   19d9c622baf23       coredns-7c65d6cfc9-z2n25
	f384fa1da72a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   69aed00250100       storage-provisioner
	5e017b3720454       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   16 minutes ago      Running             kube-apiserver            2                   71b1ea323f872       kube-apiserver-default-k8s-diff-port-871210
	de46d5ff86dd5       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   16 minutes ago      Running             kube-scheduler            2                   0b23ebb38c203       kube-scheduler-default-k8s-diff-port-871210
	ffcfbdf5799fe       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   16 minutes ago      Running             kube-controller-manager   2                   52b456845966d       kube-controller-manager-default-k8s-diff-port-871210
	36d1e8debce6d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   d595c29971d83       etcd-default-k8s-diff-port-871210
	35a6da443a267       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   21 minutes ago      Exited              kube-apiserver            1                   907ec5e7689c4       kube-apiserver-default-k8s-diff-port-871210
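	
	The container status table above reports the same data as the RuntimeService/ListContainers debug entries earlier in the crio log. For reference, here is a minimal, illustrative Go sketch of that call (not part of the test harness); it assumes access to the node's CRI-O socket at unix:///var/run/crio/crio.sock (the path in the kubeadm cri-socket annotation further down) and the k8s.io/cri-api and google.golang.org/grpc modules.
	
```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the node's CRI-O socket (assumed path, per the cri-socket annotation).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// An empty filter returns the full container list, matching the
	// "No filters were applied, returning full container list" debug message.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Same fields as the table above: short ID, name, attempt, state.
		fmt.Printf("%s  %-25s  attempt=%d  state=%s\n",
			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}
```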
	
	
	==> coredns [2f2560bf3f170fe1fc85a677b8ae7d26b493f2d145d16e3c41ffd744e1717773] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [6399bf0bce56bda89140073b8d361b4afe760a3a5d0c89f5dd4af1cab20ab37a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-871210
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-871210
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=default-k8s-diff-port-871210
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T00_04_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:04:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-871210
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:20:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:19:45 +0000   Tue, 10 Dec 2024 00:04:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:19:45 +0000   Tue, 10 Dec 2024 00:04:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:19:45 +0000   Tue, 10 Dec 2024 00:04:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:19:45 +0000   Tue, 10 Dec 2024 00:04:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.54
	  Hostname:    default-k8s-diff-port-871210
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f85c6d29444243079d72aa1918e9bb64
	  System UUID:                f85c6d29-4442-4307-9d72-aa1918e9bb64
	  Boot ID:                    917a833d-0235-453d-a23c-4ce687ec67e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7xpcc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-z2n25                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-871210                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-871210             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-871210    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-pj85d                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-871210             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-7g2qm                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-871210 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-871210 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-871210 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-871210 event: Registered Node default-k8s-diff-port-871210 in Controller
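	
	The "Allocated resources" figures above follow directly from the per-pod requests and limits in the Non-terminated Pods table and the node's Allocatable values. The small illustrative Go calculation below (values copied from the tables above; simple truncation for the percentages) reproduces them.
	
```go
package main

import "fmt"

// pct truncates toward zero; this reproduces the whole-percent figures above.
func pct(part, total float64) int { return int(part / total * 100) }

func main() {
	// Allocatable, from the node description above.
	allocCPUm := 2000.0     // 2 CPUs = 2000 millicores
	allocMemKi := 2164184.0 // memory: 2164184Ki

	// Sums of the per-pod figures in the Non-terminated Pods table.
	cpuReqm := 100 + 100 + 100 + 250 + 200 + 100 + 100.0 // = 950m
	memReqKi := (70 + 70 + 100 + 200) * 1024.0           // 440Mi in Ki
	memLimKi := (170 + 170) * 1024.0                     // 340Mi in Ki

	fmt.Printf("cpu requests:    950m  (%d%%)\n", pct(cpuReqm, allocCPUm))
	fmt.Printf("memory requests: 440Mi (%d%%)\n", pct(memReqKi, allocMemKi))
	fmt.Printf("memory limits:   340Mi (%d%%)\n", pct(memLimKi, allocMemKi))
	// Prints 47%, 20%, 16% - the same values as the Allocated resources table.
}
```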
	
	
	==> dmesg <==
	[  +0.037298] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec 9 23:59] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.048240] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609863] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.698062] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.062342] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072249] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.221911] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.164973] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.310278] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.184737] systemd-fstab-generator[805]: Ignoring "noauto" option for root device
	[  +0.063522] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.067786] systemd-fstab-generator[923]: Ignoring "noauto" option for root device
	[  +5.570506] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.303795] kauditd_printk_skb: 54 callbacks suppressed
	[ +23.589811] kauditd_printk_skb: 31 callbacks suppressed
	[Dec10 00:04] systemd-fstab-generator[2628]: Ignoring "noauto" option for root device
	[  +0.064939] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.985788] systemd-fstab-generator[2947]: Ignoring "noauto" option for root device
	[  +0.078498] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.264207] systemd-fstab-generator[3075]: Ignoring "noauto" option for root device
	[  +0.111509] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.362547] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [36d1e8debce6d6d89cfaa6bd5a97b1694d5bc200ec750caa964110e95ef3820e] <==
	{"level":"info","ts":"2024-12-10T00:04:13.146774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f41dc21f7a6c607 became leader at term 2"}
	{"level":"info","ts":"2024-12-10T00:04:13.146782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5f41dc21f7a6c607 elected leader 5f41dc21f7a6c607 at term 2"}
	{"level":"info","ts":"2024-12-10T00:04:13.150791Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5f41dc21f7a6c607","local-member-attributes":"{Name:default-k8s-diff-port-871210 ClientURLs:[https://192.168.72.54:2379]}","request-path":"/0/members/5f41dc21f7a6c607/attributes","cluster-id":"770d524238a76c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-10T00:04:13.150928Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:04:13.151505Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:04:13.153610Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:04:13.158148Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T00:04:13.159618Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-10T00:04:13.159651Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-10T00:04:13.160027Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-10T00:04:13.160430Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"770d524238a76c54","local-member-id":"5f41dc21f7a6c607","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:04:13.175015Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:04:13.175103Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:04:13.182688Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T00:04:13.183404Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.54:2379"}
	{"level":"info","ts":"2024-12-10T00:14:13.775661Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":688}
	{"level":"info","ts":"2024-12-10T00:14:13.787093Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":688,"took":"11.152249ms","hash":3535470683,"current-db-size-bytes":2138112,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2138112,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-12-10T00:14:13.787168Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3535470683,"revision":688,"compact-revision":-1}
	{"level":"info","ts":"2024-12-10T00:19:13.785067Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":930}
	{"level":"info","ts":"2024-12-10T00:19:13.788479Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":930,"took":"3.090438ms","hash":3808119689,"current-db-size-bytes":2138112,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1511424,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-12-10T00:19:13.788630Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3808119689,"revision":930,"compact-revision":688}
	{"level":"info","ts":"2024-12-10T00:19:35.690209Z","caller":"traceutil/trace.go:171","msg":"trace[1055083429] transaction","detail":"{read_only:false; response_revision:1193; number_of_response:1; }","duration":"146.891379ms","start":"2024-12-10T00:19:35.543302Z","end":"2024-12-10T00:19:35.690193Z","steps":["trace[1055083429] 'process raft request'  (duration: 146.780421ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-10T00:19:35.942089Z","caller":"traceutil/trace.go:171","msg":"trace[1558724089] transaction","detail":"{read_only:false; response_revision:1194; number_of_response:1; }","duration":"143.593803ms","start":"2024-12-10T00:19:35.798481Z","end":"2024-12-10T00:19:35.942074Z","steps":["trace[1558724089] 'process raft request'  (duration: 77.555745ms)","trace[1558724089] 'compare'  (duration: 65.972005ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-10T00:19:37.057479Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.126199ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-10T00:19:37.057630Z","caller":"traceutil/trace.go:171","msg":"trace[1544196766] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1195; }","duration":"106.347594ms","start":"2024-12-10T00:19:36.951267Z","end":"2024-12-10T00:19:37.057614Z","steps":["trace[1544196766] 'range keys from in-memory index tree'  (duration: 106.109693ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:20:40 up 21 min,  0 users,  load average: 0.20, 0.15, 0.10
	Linux default-k8s-diff-port-871210 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [35a6da443a267e57163cf919f914e2fce3e9ee54cf7ae17e914e1febf275171b] <==
	W1210 00:04:08.974464       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.099079       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.114683       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.171180       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.211662       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.273471       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.284524       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.319280       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.388108       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.492509       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.540249       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.565097       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.591436       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.592735       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.643331       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.690831       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.800514       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.853394       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.864933       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.891525       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:09.892853       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:10.132229       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:10.146953       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:10.150446       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:10.186095       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [5e017b3720454255e4a31ba485c6f73a540a911ce08804b8bd412b6fa0ae669a] <==
	 > logger="UnhandledError"
	I1210 00:17:16.172226       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 00:19:15.168677       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:19:15.168800       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 00:19:16.170441       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:19:16.170517       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1210 00:19:16.170557       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:19:16.170684       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 00:19:16.171833       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:19:16.171933       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 00:20:16.172319       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:20:16.172369       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1210 00:20:16.172411       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:20:16.172437       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 00:20:16.173513       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:20:16.173590       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
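	
	The repeated 503 responses for v1beta1.metrics.k8s.io above indicate the aggregated APIService never became available; the kubelet log further down shows the backing metrics-server pod stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, which would keep the service unreachable. As a purely illustrative sketch (not part of the test suite, and assuming k8s.io/client-go with a kubeconfig at the default ~/.kube/config location), the APIService's status conditions can be read with the dynamic client:
	
```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// APIService objects live in the apiregistration.k8s.io group.
	gvr := schema.GroupVersionResource{
		Group:    "apiregistration.k8s.io",
		Version:  "v1",
		Resource: "apiservices",
	}
	obj, err := dyn.Resource(gvr).Get(context.Background(),
		"v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Print the status conditions; Available is expected to be False while
	// the backing metrics-server endpoints are missing.
	conds, _, _ := unstructured.NestedSlice(obj.Object, "status", "conditions")
	for _, c := range conds {
		if m, ok := c.(map[string]interface{}); ok {
			fmt.Printf("%v=%v (%v): %v\n", m["type"], m["status"], m["reason"], m["message"])
		}
	}
}
```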
	
	
	==> kube-controller-manager [ffcfbdf5799fe9759c9be24bb1a4680940136ba17437b3687fad21f9dd1978f6] <==
	I1210 00:15:22.654109       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:15:31.721718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="89.648µs"
	I1210 00:15:46.715909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="59.498µs"
	E1210 00:15:52.201316       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:15:52.660638       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:16:22.206868       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:16:22.668296       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:16:52.214771       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:16:52.675953       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:17:22.222045       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:17:22.683123       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:17:52.227369       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:17:52.691039       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:18:22.232919       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:18:22.698535       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:18:52.240404       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:18:52.706350       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:19:22.246483       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:19:22.713964       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:19:45.392158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-871210"
	E1210 00:19:52.251797       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:19:52.722718       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:20:22.257517       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:20:22.730377       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:20:34.717508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="127.129µs"
	
	
	==> kube-proxy [4985f7331836a5ae10fb2fb99513b4cc87b70875d16351a7466c968396c6968c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 00:04:25.570215       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 00:04:25.585515       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.54"]
	E1210 00:04:25.585709       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 00:04:25.616681       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 00:04:25.616788       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 00:04:25.616859       1 server_linux.go:169] "Using iptables Proxier"
	I1210 00:04:25.619853       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 00:04:25.620693       1 server.go:483] "Version info" version="v1.31.2"
	I1210 00:04:25.620742       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 00:04:25.622281       1 config.go:199] "Starting service config controller"
	I1210 00:04:25.622348       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 00:04:25.622389       1 config.go:105] "Starting endpoint slice config controller"
	I1210 00:04:25.622405       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 00:04:25.622937       1 config.go:328] "Starting node config controller"
	I1210 00:04:25.624467       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 00:04:25.722801       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1210 00:04:25.722860       1 shared_informer.go:320] Caches are synced for service config
	I1210 00:04:25.724636       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [de46d5ff86dd5461c31f73b51f91a26504e1bc4d47ae016ad46487c82e4a8a62] <==
	W1210 00:04:15.208028       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 00:04:15.213788       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1210 00:04:16.018431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1210 00:04:16.018468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.049597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 00:04:16.049651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.065250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1210 00:04:16.065387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.136519       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 00:04:16.136779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.167347       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1210 00:04:16.167538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.169733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 00:04:16.169821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.249896       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1210 00:04:16.250000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.381496       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1210 00:04:16.381546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.455587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 00:04:16.455697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:04:16.458273       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 00:04:16.458373       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1210 00:04:16.469931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 00:04:16.469989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1210 00:04:19.565482       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 00:19:40 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:19:40.701744    2954 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7g2qm" podUID="49ac129a-c85d-4af1-b3b2-06bc10bced77"
	Dec 10 00:19:47 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:19:47.949581    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789987949113203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:47 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:19:47.950038    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789987949113203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:55 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:19:55.704768    2954 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7g2qm" podUID="49ac129a-c85d-4af1-b3b2-06bc10bced77"
	Dec 10 00:19:57 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:19:57.951868    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789997951417650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:57 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:19:57.952149    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789997951417650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:20:07 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:07.702594    2954 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7g2qm" podUID="49ac129a-c85d-4af1-b3b2-06bc10bced77"
	Dec 10 00:20:07 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:07.954201    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733790007953809620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:20:07 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:07.954278    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733790007953809620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:20:17 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:17.717122    2954 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 00:20:17 default-k8s-diff-port-871210 kubelet[2954]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 00:20:17 default-k8s-diff-port-871210 kubelet[2954]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 00:20:17 default-k8s-diff-port-871210 kubelet[2954]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:20:17 default-k8s-diff-port-871210 kubelet[2954]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:20:17 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:17.955916    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733790017955425843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:20:17 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:17.955939    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733790017955425843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:20:19 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:19.712137    2954 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 10 00:20:19 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:19.712199    2954 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 10 00:20:19 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:19.712315    2954 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmstg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountP
ropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-7g2qm_kube-system(49ac129a-c85d-4af1-b3b2-06bc10bced77): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 10 00:20:19 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:19.713659    2954 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-7g2qm" podUID="49ac129a-c85d-4af1-b3b2-06bc10bced77"
	Dec 10 00:20:27 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:27.957757    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733790027957373161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:20:27 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:27.958100    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733790027957373161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:20:34 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:34.701399    2954 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7g2qm" podUID="49ac129a-c85d-4af1-b3b2-06bc10bced77"
	Dec 10 00:20:37 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:37.960711    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733790037960062932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:20:37 default-k8s-diff-port-871210 kubelet[2954]: E1210 00:20:37.961083    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733790037960062932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f384fa1da72a45dcdbc35cfd2f4fa3ab98712f0cc2d80629ce52fbb0c443f0cc] <==
	I1210 00:04:24.953013       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 00:04:24.979757       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 00:04:24.979880       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 00:04:25.076540       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 00:04:25.114170       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-871210_9fdfc0dc-d5c6-480c-a56d-52cdebcd82e0!
	I1210 00:04:25.116809       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85baa87c-b15b-4bc6-84f8-e3b16b53ecdd", APIVersion:"v1", ResourceVersion:"385", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-871210_9fdfc0dc-d5c6-480c-a56d-52cdebcd82e0 became leader
	I1210 00:04:25.215297       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-871210_9fdfc0dc-d5c6-480c-a56d-52cdebcd82e0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-871210 -n default-k8s-diff-port-871210
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-871210 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-7g2qm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-871210 describe pod metrics-server-6867b74b74-7g2qm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-871210 describe pod metrics-server-6867b74b74-7g2qm: exit status 1 (67.915854ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-7g2qm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-871210 describe pod metrics-server-6867b74b74-7g2qm: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (420.88s)
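Two observations on the failure above, plus a minimal by-hand sketch of the same post-mortem checks. First, the ImagePullBackOff noise in the kubelet log is expected for this suite: the metrics-server addon was enabled with its registry overridden to the unresolvable fake.domain (see the addons enable metrics-server ... --registries=MetricsServer=fake.domain rows in the audit table later in this report), so the pod can never pull fake.domain/registry.k8s.io/echoserver:1.4. Second, the NotFound from the describe call is most likely a namespace mismatch: the kubelet reports the pod as kube-system/metrics-server-6867b74b74-7g2qm, while the describe was issued without -n and therefore looked in the default namespace. A manual rerun of the same checks, scoped to kube-system and using the profile and pod names from this run (the pod may of course be gone by the time this is tried), would look roughly like:

	kubectl --context default-k8s-diff-port-871210 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	kubectl --context default-k8s-diff-port-871210 -n kube-system describe pod metrics-server-6867b74b74-7g2qm
	kubectl --context default-k8s-diff-port-871210 -n kube-system get deploy metrics-server -o=jsonpath='{.spec.template.spec.containers[0].image}'

The last command assumes the addon's default deployment name, metrics-server; if that assumption holds it should print the fake.domain-prefixed image the kubelet is failing to pull.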

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (285.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-048296 -n no-preload-048296
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-10 00:19:06.645707194 +0000 UTC m=+6431.077322301
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-048296 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-048296 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.284µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-048296 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
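For reference, the assertion at start_stop_delete_test.go:297 checks that the dashboard-metrics-scraper deployment carries the registry.k8s.io/echoserver:1.4 image override passed via --images=MetricsScraper=registry.k8s.io/echoserver:1.4 when the dashboard addon was enabled (visible in the audit table below). Because the test context had already expired, the describe above returned immediately with context deadline exceeded instead of producing output. A manual equivalent of the check, sketched with the profile name from this run, would be roughly:

	kubectl --context no-preload-048296 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-048296 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper
	kubectl --context no-preload-048296 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o=jsonpath='{.spec.template.spec.containers[0].image}'

The first command shows whether any k8s-app=kubernetes-dashboard pods ever appeared; the last prints the scraper image so it can be compared against the expected registry.k8s.io/echoserver:1.4 string.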
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048296 -n no-preload-048296
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-048296 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-048296 logs -n 25: (1.169406074s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo find                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo crio                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-030585                                       | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-866797 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | disable-driver-mounts-866797                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:52 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-048296             | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-825613            | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-871210  | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC | 09 Dec 24 23:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC |                     |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-720064        | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-048296                  | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-825613                 | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-871210       | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC | 10 Dec 24 00:04 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-720064             | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC | 10 Dec 24 00:19 UTC |
	| start   | -p newest-cni-677937 --memory=2200 --alsologtostderr   | newest-cni-677937            | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:19:01
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:19:01.882939   91113 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:19:01.883055   91113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:19:01.883064   91113 out.go:358] Setting ErrFile to fd 2...
	I1210 00:19:01.883068   91113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:19:01.883276   91113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1210 00:19:01.883962   91113 out.go:352] Setting JSON to false
	I1210 00:19:01.884928   91113 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10893,"bootTime":1733779049,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:19:01.885001   91113 start.go:139] virtualization: kvm guest
	I1210 00:19:01.887267   91113 out.go:177] * [newest-cni-677937] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:19:01.888770   91113 notify.go:220] Checking for updates...
	I1210 00:19:01.888801   91113 out.go:177]   - MINIKUBE_LOCATION=19888
	I1210 00:19:01.890277   91113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:19:01.891689   91113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:19:01.892958   91113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1210 00:19:01.894326   91113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:19:01.896462   91113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:19:01.898308   91113 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:19:01.898417   91113 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:19:01.898520   91113 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:19:01.898618   91113 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:19:01.936993   91113 out.go:177] * Using the kvm2 driver based on user configuration
	I1210 00:19:01.938336   91113 start.go:297] selected driver: kvm2
	I1210 00:19:01.938369   91113 start.go:901] validating driver "kvm2" against <nil>
	I1210 00:19:01.938389   91113 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:19:01.939192   91113 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:19:01.939297   91113 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:19:01.956466   91113 install.go:137] /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:19:01.956527   91113 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1210 00:19:01.956583   91113 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 00:19:01.956788   91113 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 00:19:01.956816   91113 cni.go:84] Creating CNI manager for ""
	I1210 00:19:01.956866   91113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:19:01.956875   91113 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 00:19:01.956919   91113 start.go:340] cluster config:
	{Name:newest-cni-677937 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-677937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:19:01.957008   91113 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:19:01.959007   91113 out.go:177] * Starting "newest-cni-677937" primary control-plane node in "newest-cni-677937" cluster
	I1210 00:19:01.960226   91113 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:19:01.960284   91113 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:19:01.960296   91113 cache.go:56] Caching tarball of preloaded images
	I1210 00:19:01.960385   91113 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:19:01.960400   91113 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:19:01.960522   91113 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/config.json ...
	I1210 00:19:01.960548   91113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/newest-cni-677937/config.json: {Name:mk9582fa5fc235c2ab303bc9997f26d8ee39b655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:19:01.960737   91113 start.go:360] acquireMachinesLock for newest-cni-677937: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:19:01.960811   91113 start.go:364] duration metric: took 49.981µs to acquireMachinesLock for "newest-cni-677937"
	I1210 00:19:01.960836   91113 start.go:93] Provisioning new machine with config: &{Name:newest-cni-677937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:newest-cni-677937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:19:01.960928   91113 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 00:19:01.964296   91113 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:19:01.964454   91113 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:19:01.964489   91113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:19:01.979514   91113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32901
	I1210 00:19:01.980002   91113 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:19:01.980532   91113 main.go:141] libmachine: Using API Version  1
	I1210 00:19:01.980553   91113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:19:01.980910   91113 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:19:01.981130   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetMachineName
	I1210 00:19:01.981250   91113 main.go:141] libmachine: (newest-cni-677937) Calling .DriverName
	I1210 00:19:01.981363   91113 start.go:159] libmachine.API.Create for "newest-cni-677937" (driver="kvm2")
	I1210 00:19:01.981396   91113 client.go:168] LocalClient.Create starting
	I1210 00:19:01.981433   91113 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem
	I1210 00:19:01.981468   91113 main.go:141] libmachine: Decoding PEM data...
	I1210 00:19:01.981482   91113 main.go:141] libmachine: Parsing certificate...
	I1210 00:19:01.981526   91113 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem
	I1210 00:19:01.981545   91113 main.go:141] libmachine: Decoding PEM data...
	I1210 00:19:01.981556   91113 main.go:141] libmachine: Parsing certificate...
	I1210 00:19:01.981569   91113 main.go:141] libmachine: Running pre-create checks...
	I1210 00:19:01.981578   91113 main.go:141] libmachine: (newest-cni-677937) Calling .PreCreateCheck
	I1210 00:19:01.981932   91113 main.go:141] libmachine: (newest-cni-677937) Calling .GetConfigRaw
	I1210 00:19:01.982274   91113 main.go:141] libmachine: Creating machine...
	I1210 00:19:01.982282   91113 main.go:141] libmachine: (newest-cni-677937) Calling .Create
	I1210 00:19:01.982406   91113 main.go:141] libmachine: (newest-cni-677937) Creating KVM machine...
	I1210 00:19:01.983714   91113 main.go:141] libmachine: (newest-cni-677937) DBG | found existing default KVM network
	I1210 00:19:01.985179   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:01.985028   91136 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002700e0}
	I1210 00:19:01.985205   91113 main.go:141] libmachine: (newest-cni-677937) DBG | created network xml: 
	I1210 00:19:01.985220   91113 main.go:141] libmachine: (newest-cni-677937) DBG | <network>
	I1210 00:19:01.985233   91113 main.go:141] libmachine: (newest-cni-677937) DBG |   <name>mk-newest-cni-677937</name>
	I1210 00:19:01.985248   91113 main.go:141] libmachine: (newest-cni-677937) DBG |   <dns enable='no'/>
	I1210 00:19:01.985255   91113 main.go:141] libmachine: (newest-cni-677937) DBG |   
	I1210 00:19:01.985266   91113 main.go:141] libmachine: (newest-cni-677937) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1210 00:19:01.985272   91113 main.go:141] libmachine: (newest-cni-677937) DBG |     <dhcp>
	I1210 00:19:01.985293   91113 main.go:141] libmachine: (newest-cni-677937) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1210 00:19:01.985301   91113 main.go:141] libmachine: (newest-cni-677937) DBG |     </dhcp>
	I1210 00:19:01.985309   91113 main.go:141] libmachine: (newest-cni-677937) DBG |   </ip>
	I1210 00:19:01.985312   91113 main.go:141] libmachine: (newest-cni-677937) DBG |   
	I1210 00:19:01.985345   91113 main.go:141] libmachine: (newest-cni-677937) DBG | </network>
	I1210 00:19:01.985364   91113 main.go:141] libmachine: (newest-cni-677937) DBG | 
	I1210 00:19:01.990916   91113 main.go:141] libmachine: (newest-cni-677937) DBG | trying to create private KVM network mk-newest-cni-677937 192.168.39.0/24...
	I1210 00:19:02.063925   91113 main.go:141] libmachine: (newest-cni-677937) DBG | private KVM network mk-newest-cni-677937 192.168.39.0/24 created
	I1210 00:19:02.063959   91113 main.go:141] libmachine: (newest-cni-677937) Setting up store path in /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937 ...
	I1210 00:19:02.063977   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:02.063886   91136 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1210 00:19:02.063997   91113 main.go:141] libmachine: (newest-cni-677937) Building disk image from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:19:02.064016   91113 main.go:141] libmachine: (newest-cni-677937) Downloading /home/jenkins/minikube-integration/19888-18950/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:19:02.318526   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:02.318385   91136 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/id_rsa...
	I1210 00:19:02.512744   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:02.512579   91136 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/newest-cni-677937.rawdisk...
	I1210 00:19:02.512782   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Writing magic tar header
	I1210 00:19:02.512805   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Writing SSH key tar header
	I1210 00:19:02.512818   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:02.512772   91136 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937 ...
	I1210 00:19:02.512923   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937
	I1210 00:19:02.512962   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube/machines
	I1210 00:19:02.512992   91113 main.go:141] libmachine: (newest-cni-677937) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937 (perms=drwx------)
	I1210 00:19:02.513007   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950/.minikube
	I1210 00:19:02.513021   91113 main.go:141] libmachine: (newest-cni-677937) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:19:02.513037   91113 main.go:141] libmachine: (newest-cni-677937) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950/.minikube (perms=drwxr-xr-x)
	I1210 00:19:02.513051   91113 main.go:141] libmachine: (newest-cni-677937) Setting executable bit set on /home/jenkins/minikube-integration/19888-18950 (perms=drwxrwxr-x)
	I1210 00:19:02.513064   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19888-18950
	I1210 00:19:02.513082   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:19:02.513094   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:19:02.513105   91113 main.go:141] libmachine: (newest-cni-677937) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:19:02.513129   91113 main.go:141] libmachine: (newest-cni-677937) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:19:02.513141   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Checking permissions on dir: /home
	I1210 00:19:02.513153   91113 main.go:141] libmachine: (newest-cni-677937) DBG | Skipping /home - not owner
	I1210 00:19:02.513166   91113 main.go:141] libmachine: (newest-cni-677937) Creating domain...
	I1210 00:19:02.514270   91113 main.go:141] libmachine: (newest-cni-677937) define libvirt domain using xml: 
	I1210 00:19:02.514307   91113 main.go:141] libmachine: (newest-cni-677937) <domain type='kvm'>
	I1210 00:19:02.514318   91113 main.go:141] libmachine: (newest-cni-677937)   <name>newest-cni-677937</name>
	I1210 00:19:02.514334   91113 main.go:141] libmachine: (newest-cni-677937)   <memory unit='MiB'>2200</memory>
	I1210 00:19:02.514342   91113 main.go:141] libmachine: (newest-cni-677937)   <vcpu>2</vcpu>
	I1210 00:19:02.514356   91113 main.go:141] libmachine: (newest-cni-677937)   <features>
	I1210 00:19:02.514364   91113 main.go:141] libmachine: (newest-cni-677937)     <acpi/>
	I1210 00:19:02.514372   91113 main.go:141] libmachine: (newest-cni-677937)     <apic/>
	I1210 00:19:02.514381   91113 main.go:141] libmachine: (newest-cni-677937)     <pae/>
	I1210 00:19:02.514392   91113 main.go:141] libmachine: (newest-cni-677937)     
	I1210 00:19:02.514403   91113 main.go:141] libmachine: (newest-cni-677937)   </features>
	I1210 00:19:02.514410   91113 main.go:141] libmachine: (newest-cni-677937)   <cpu mode='host-passthrough'>
	I1210 00:19:02.514423   91113 main.go:141] libmachine: (newest-cni-677937)   
	I1210 00:19:02.514433   91113 main.go:141] libmachine: (newest-cni-677937)   </cpu>
	I1210 00:19:02.514442   91113 main.go:141] libmachine: (newest-cni-677937)   <os>
	I1210 00:19:02.514457   91113 main.go:141] libmachine: (newest-cni-677937)     <type>hvm</type>
	I1210 00:19:02.514484   91113 main.go:141] libmachine: (newest-cni-677937)     <boot dev='cdrom'/>
	I1210 00:19:02.514504   91113 main.go:141] libmachine: (newest-cni-677937)     <boot dev='hd'/>
	I1210 00:19:02.514542   91113 main.go:141] libmachine: (newest-cni-677937)     <bootmenu enable='no'/>
	I1210 00:19:02.514563   91113 main.go:141] libmachine: (newest-cni-677937)   </os>
	I1210 00:19:02.514575   91113 main.go:141] libmachine: (newest-cni-677937)   <devices>
	I1210 00:19:02.514588   91113 main.go:141] libmachine: (newest-cni-677937)     <disk type='file' device='cdrom'>
	I1210 00:19:02.514622   91113 main.go:141] libmachine: (newest-cni-677937)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/boot2docker.iso'/>
	I1210 00:19:02.514633   91113 main.go:141] libmachine: (newest-cni-677937)       <target dev='hdc' bus='scsi'/>
	I1210 00:19:02.514643   91113 main.go:141] libmachine: (newest-cni-677937)       <readonly/>
	I1210 00:19:02.514654   91113 main.go:141] libmachine: (newest-cni-677937)     </disk>
	I1210 00:19:02.514669   91113 main.go:141] libmachine: (newest-cni-677937)     <disk type='file' device='disk'>
	I1210 00:19:02.514686   91113 main.go:141] libmachine: (newest-cni-677937)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:19:02.514703   91113 main.go:141] libmachine: (newest-cni-677937)       <source file='/home/jenkins/minikube-integration/19888-18950/.minikube/machines/newest-cni-677937/newest-cni-677937.rawdisk'/>
	I1210 00:19:02.514713   91113 main.go:141] libmachine: (newest-cni-677937)       <target dev='hda' bus='virtio'/>
	I1210 00:19:02.514727   91113 main.go:141] libmachine: (newest-cni-677937)     </disk>
	I1210 00:19:02.514737   91113 main.go:141] libmachine: (newest-cni-677937)     <interface type='network'>
	I1210 00:19:02.514750   91113 main.go:141] libmachine: (newest-cni-677937)       <source network='mk-newest-cni-677937'/>
	I1210 00:19:02.514759   91113 main.go:141] libmachine: (newest-cni-677937)       <model type='virtio'/>
	I1210 00:19:02.514775   91113 main.go:141] libmachine: (newest-cni-677937)     </interface>
	I1210 00:19:02.514799   91113 main.go:141] libmachine: (newest-cni-677937)     <interface type='network'>
	I1210 00:19:02.514813   91113 main.go:141] libmachine: (newest-cni-677937)       <source network='default'/>
	I1210 00:19:02.514825   91113 main.go:141] libmachine: (newest-cni-677937)       <model type='virtio'/>
	I1210 00:19:02.514835   91113 main.go:141] libmachine: (newest-cni-677937)     </interface>
	I1210 00:19:02.514846   91113 main.go:141] libmachine: (newest-cni-677937)     <serial type='pty'>
	I1210 00:19:02.514859   91113 main.go:141] libmachine: (newest-cni-677937)       <target port='0'/>
	I1210 00:19:02.514870   91113 main.go:141] libmachine: (newest-cni-677937)     </serial>
	I1210 00:19:02.514889   91113 main.go:141] libmachine: (newest-cni-677937)     <console type='pty'>
	I1210 00:19:02.514917   91113 main.go:141] libmachine: (newest-cni-677937)       <target type='serial' port='0'/>
	I1210 00:19:02.514929   91113 main.go:141] libmachine: (newest-cni-677937)     </console>
	I1210 00:19:02.514939   91113 main.go:141] libmachine: (newest-cni-677937)     <rng model='virtio'>
	I1210 00:19:02.514949   91113 main.go:141] libmachine: (newest-cni-677937)       <backend model='random'>/dev/random</backend>
	I1210 00:19:02.514958   91113 main.go:141] libmachine: (newest-cni-677937)     </rng>
	I1210 00:19:02.514977   91113 main.go:141] libmachine: (newest-cni-677937)     
	I1210 00:19:02.514996   91113 main.go:141] libmachine: (newest-cni-677937)     
	I1210 00:19:02.515008   91113 main.go:141] libmachine: (newest-cni-677937)   </devices>
	I1210 00:19:02.515018   91113 main.go:141] libmachine: (newest-cni-677937) </domain>
	I1210 00:19:02.515033   91113 main.go:141] libmachine: (newest-cni-677937) 
	I1210 00:19:02.519387   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:07:79:9b in network default
	I1210 00:19:02.520037   91113 main.go:141] libmachine: (newest-cni-677937) Ensuring networks are active...
	I1210 00:19:02.520063   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:02.520811   91113 main.go:141] libmachine: (newest-cni-677937) Ensuring network default is active
	I1210 00:19:02.521177   91113 main.go:141] libmachine: (newest-cni-677937) Ensuring network mk-newest-cni-677937 is active
	I1210 00:19:02.521788   91113 main.go:141] libmachine: (newest-cni-677937) Getting domain xml...
	I1210 00:19:02.522701   91113 main.go:141] libmachine: (newest-cni-677937) Creating domain...
	I1210 00:19:03.784070   91113 main.go:141] libmachine: (newest-cni-677937) Waiting to get IP...
	I1210 00:19:03.784739   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:03.785257   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:03.785283   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:03.785213   91136 retry.go:31] will retry after 271.868849ms: waiting for machine to come up
	I1210 00:19:04.058606   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:04.059115   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:04.059145   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:04.059069   91136 retry.go:31] will retry after 296.967378ms: waiting for machine to come up
	I1210 00:19:04.357546   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:04.358014   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:04.358050   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:04.357969   91136 retry.go:31] will retry after 318.242447ms: waiting for machine to come up
	I1210 00:19:04.677589   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:04.677991   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:04.678036   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:04.677958   91136 retry.go:31] will retry after 578.593134ms: waiting for machine to come up
	I1210 00:19:05.258479   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:05.258989   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:05.259017   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:05.258962   91136 retry.go:31] will retry after 698.184483ms: waiting for machine to come up
	I1210 00:19:05.958995   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:05.959620   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:05.959647   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:05.959554   91136 retry.go:31] will retry after 600.420589ms: waiting for machine to come up
	I1210 00:19:06.561175   91113 main.go:141] libmachine: (newest-cni-677937) DBG | domain newest-cni-677937 has defined MAC address 52:54:00:d7:ab:8b in network mk-newest-cni-677937
	I1210 00:19:06.561618   91113 main.go:141] libmachine: (newest-cni-677937) DBG | unable to find current IP address of domain newest-cni-677937 in network mk-newest-cni-677937
	I1210 00:19:06.561645   91113 main.go:141] libmachine: (newest-cni-677937) DBG | I1210 00:19:06.561575   91136 retry.go:31] will retry after 969.556201ms: waiting for machine to come up
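Editor's note: the retry lines above show libmachine polling libvirt until the new VM reports an IP address, waiting a little longer (with jitter) on each attempt. A minimal sketch of that wait-for-IP pattern, standard library only; lookupIP is a hypothetical stand-in for the real libvirt DHCP-lease query, not minikube's actual code:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for the real libvirt DHCP-lease
    // query; here it always fails so the retry path is exercised.
    func lookupIP(domain string) (string, error) {
        return "", errors.New("unable to find current IP address of domain " + domain)
    }

    // waitForIP retries lookupIP with growing, jittered delays until the
    // machine reports an address or the deadline expires.
    func waitForIP(domain string, deadline time.Duration) (string, error) {
        wait := 250 * time.Millisecond
        for start := time.Now(); time.Since(start) < deadline; {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            // add jitter, roughly matching the 271ms, 296ms, 318ms... steps in the log
            d := wait + time.Duration(rand.Int63n(int64(wait/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
            time.Sleep(d)
            wait = wait * 3 / 2
        }
        return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
    }

    func main() {
        if _, err := waitForIP("newest-cni-677937", 2*time.Second); err != nil {
            fmt.Println(err)
        }
    }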
	
	
	==> CRI-O <==
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.246771776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789947246747316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c6e8145-ab66-4818-869c-40d4efe53719 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.247408548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3bfa33cb-ac76-4144-ba6e-fe2fbee1da44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.247475698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3bfa33cb-ac76-4144-ba6e-fe2fbee1da44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.247669639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97d81c851470ff8904f2a704c67542ee6316104d04ab675437a01ea6f07fbfd8,PodSandboxId:3e63d3f10ab367d7f1654c75a37a1da60094faf8cd712564e70d0d5dba1e0e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733789112587816469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55fe311f-4610-4805-9fb7-3f1cac7c96e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b237f65b1f52cc0946de1dff121714fb7635a1fbe982c80c699291423b82ccf8,PodSandboxId:37db34926854c5a7226a215f95806d6c5ad4d2b2363dc7f29cd5e4ee4c161c10,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112185958500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-56djc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e25bad9-b88a-4ac8-a180-968bf6b057a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a0e82982a44e43e0ed41ee3568d34c43378472c4ad70abc0278c8546b5eab1,PodSandboxId:b3eaf6f8899f7f7c5723661565f19184e409f490d8d44b49681ddbfa54f074e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112059041810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8rxx7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f
dfd41b-8ef7-41af-a703-ebecfe9ad319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b333a8bf49675208e97a25fca7ababa5c5e7aa1fa332e2d69b8abbca962a2c4,PodSandboxId:5c8d4180070cac533a1c5986663ae2a8f67bb62de994ac84b837d4580da01292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733789111448208569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qklxb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bb029a1-abf9-4825-b9ec-0520a78cb3d8,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c119307a718a68531ce047e24f34ab358273d68db90f69181c4d18122412639e,PodSandboxId:2f0a05c488178f0ab5a198f121cc114fa7792a6f4c7860cd619fc0c963929221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789100581038645,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36b46162662e8ae135696fdf8945315,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280dbddeda2f1b7a46a964534d86646310bb4e84b9820f666730c6b111b08f6,PodSandboxId:c72f6f0dc52dd73dcbd327ba4546c4ee3ad1aad42f23613604bc2f31fe087951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173378910061119
1956,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7012c4523526373176e1bdb3740fe4c3f0da98aefab113c29ec3fd8fc86dd,PodSandboxId:0604e23adb4603bc51d0c42824765a240119b427d9680120e1c727a002283b77,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789100574603874,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d3ccc5731b295ff6bc2d83c07ce51c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63e80d74c90d945f31cb7bab277568619312b598dfb63dd17020118f61ed233,PodSandboxId:32cf03c03dbf5e86519da793fb2cb443fe863efefb7f3e9afd145c7dbd3000fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789100520822298,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ef24a05a34425fdec92ff0a9fd5bbfa,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a425c2d931ef8748e90cd391f2121a62c9cbbaf1a686b2b9f5f811110fffbf58,PodSandboxId:7da95a45dfd0f1e9baa3c1bc2763cb0bd944cabac77d851c0432c361b0139d70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733788815385624290,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3bfa33cb-ac76-4144-ba6e-fe2fbee1da44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.284625647Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a800cf90-7390-4d34-b911-34d5b4cf514f name=/runtime.v1.RuntimeService/Version
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.284717136Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a800cf90-7390-4d34-b911-34d5b4cf514f name=/runtime.v1.RuntimeService/Version
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.286237727Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74e7cd7c-d7d5-43e3-b494-13a89db409b6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.286636774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789947286609814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74e7cd7c-d7d5-43e3-b494-13a89db409b6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.287297040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9eb5c3cd-efa5-4c33-bf0d-8f4c03998f08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.287464686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9eb5c3cd-efa5-4c33-bf0d-8f4c03998f08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.288142618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97d81c851470ff8904f2a704c67542ee6316104d04ab675437a01ea6f07fbfd8,PodSandboxId:3e63d3f10ab367d7f1654c75a37a1da60094faf8cd712564e70d0d5dba1e0e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733789112587816469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55fe311f-4610-4805-9fb7-3f1cac7c96e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b237f65b1f52cc0946de1dff121714fb7635a1fbe982c80c699291423b82ccf8,PodSandboxId:37db34926854c5a7226a215f95806d6c5ad4d2b2363dc7f29cd5e4ee4c161c10,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112185958500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-56djc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e25bad9-b88a-4ac8-a180-968bf6b057a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a0e82982a44e43e0ed41ee3568d34c43378472c4ad70abc0278c8546b5eab1,PodSandboxId:b3eaf6f8899f7f7c5723661565f19184e409f490d8d44b49681ddbfa54f074e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112059041810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8rxx7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f
dfd41b-8ef7-41af-a703-ebecfe9ad319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b333a8bf49675208e97a25fca7ababa5c5e7aa1fa332e2d69b8abbca962a2c4,PodSandboxId:5c8d4180070cac533a1c5986663ae2a8f67bb62de994ac84b837d4580da01292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733789111448208569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qklxb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bb029a1-abf9-4825-b9ec-0520a78cb3d8,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c119307a718a68531ce047e24f34ab358273d68db90f69181c4d18122412639e,PodSandboxId:2f0a05c488178f0ab5a198f121cc114fa7792a6f4c7860cd619fc0c963929221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789100581038645,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36b46162662e8ae135696fdf8945315,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280dbddeda2f1b7a46a964534d86646310bb4e84b9820f666730c6b111b08f6,PodSandboxId:c72f6f0dc52dd73dcbd327ba4546c4ee3ad1aad42f23613604bc2f31fe087951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173378910061119
1956,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7012c4523526373176e1bdb3740fe4c3f0da98aefab113c29ec3fd8fc86dd,PodSandboxId:0604e23adb4603bc51d0c42824765a240119b427d9680120e1c727a002283b77,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789100574603874,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d3ccc5731b295ff6bc2d83c07ce51c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63e80d74c90d945f31cb7bab277568619312b598dfb63dd17020118f61ed233,PodSandboxId:32cf03c03dbf5e86519da793fb2cb443fe863efefb7f3e9afd145c7dbd3000fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789100520822298,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ef24a05a34425fdec92ff0a9fd5bbfa,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a425c2d931ef8748e90cd391f2121a62c9cbbaf1a686b2b9f5f811110fffbf58,PodSandboxId:7da95a45dfd0f1e9baa3c1bc2763cb0bd944cabac77d851c0432c361b0139d70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733788815385624290,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9eb5c3cd-efa5-4c33-bf0d-8f4c03998f08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.326220534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fad6aa51-c147-4717-aeaf-af2a13e1543e name=/runtime.v1.RuntimeService/Version
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.326305571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fad6aa51-c147-4717-aeaf-af2a13e1543e name=/runtime.v1.RuntimeService/Version
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.327651268Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52dd9034-e585-4838-961e-767fc5390bb8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.328582255Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789947328547081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52dd9034-e585-4838-961e-767fc5390bb8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.329194268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=622216ba-b0d8-4141-9695-e849b3008012 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.329261305Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=622216ba-b0d8-4141-9695-e849b3008012 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.329511970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97d81c851470ff8904f2a704c67542ee6316104d04ab675437a01ea6f07fbfd8,PodSandboxId:3e63d3f10ab367d7f1654c75a37a1da60094faf8cd712564e70d0d5dba1e0e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733789112587816469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55fe311f-4610-4805-9fb7-3f1cac7c96e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b237f65b1f52cc0946de1dff121714fb7635a1fbe982c80c699291423b82ccf8,PodSandboxId:37db34926854c5a7226a215f95806d6c5ad4d2b2363dc7f29cd5e4ee4c161c10,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112185958500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-56djc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e25bad9-b88a-4ac8-a180-968bf6b057a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a0e82982a44e43e0ed41ee3568d34c43378472c4ad70abc0278c8546b5eab1,PodSandboxId:b3eaf6f8899f7f7c5723661565f19184e409f490d8d44b49681ddbfa54f074e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112059041810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8rxx7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f
dfd41b-8ef7-41af-a703-ebecfe9ad319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b333a8bf49675208e97a25fca7ababa5c5e7aa1fa332e2d69b8abbca962a2c4,PodSandboxId:5c8d4180070cac533a1c5986663ae2a8f67bb62de994ac84b837d4580da01292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733789111448208569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qklxb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bb029a1-abf9-4825-b9ec-0520a78cb3d8,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c119307a718a68531ce047e24f34ab358273d68db90f69181c4d18122412639e,PodSandboxId:2f0a05c488178f0ab5a198f121cc114fa7792a6f4c7860cd619fc0c963929221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789100581038645,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36b46162662e8ae135696fdf8945315,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280dbddeda2f1b7a46a964534d86646310bb4e84b9820f666730c6b111b08f6,PodSandboxId:c72f6f0dc52dd73dcbd327ba4546c4ee3ad1aad42f23613604bc2f31fe087951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173378910061119
1956,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7012c4523526373176e1bdb3740fe4c3f0da98aefab113c29ec3fd8fc86dd,PodSandboxId:0604e23adb4603bc51d0c42824765a240119b427d9680120e1c727a002283b77,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789100574603874,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d3ccc5731b295ff6bc2d83c07ce51c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63e80d74c90d945f31cb7bab277568619312b598dfb63dd17020118f61ed233,PodSandboxId:32cf03c03dbf5e86519da793fb2cb443fe863efefb7f3e9afd145c7dbd3000fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789100520822298,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ef24a05a34425fdec92ff0a9fd5bbfa,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a425c2d931ef8748e90cd391f2121a62c9cbbaf1a686b2b9f5f811110fffbf58,PodSandboxId:7da95a45dfd0f1e9baa3c1bc2763cb0bd944cabac77d851c0432c361b0139d70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733788815385624290,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=622216ba-b0d8-4141-9695-e849b3008012 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.360980212Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c59d3f0-59a7-4393-953c-22b02066475b name=/runtime.v1.RuntimeService/Version
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.361072494Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c59d3f0-59a7-4393-953c-22b02066475b name=/runtime.v1.RuntimeService/Version
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.363020741Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7d8e14d-7f0d-431e-92f0-4824fe2bd9ec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.363347918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789947363326248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7d8e14d-7f0d-431e-92f0-4824fe2bd9ec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.363911578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=692a282f-f48c-432d-a1f2-dcab321c14e9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.364012724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=692a282f-f48c-432d-a1f2-dcab321c14e9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:19:07 no-preload-048296 crio[708]: time="2024-12-10 00:19:07.364372241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97d81c851470ff8904f2a704c67542ee6316104d04ab675437a01ea6f07fbfd8,PodSandboxId:3e63d3f10ab367d7f1654c75a37a1da60094faf8cd712564e70d0d5dba1e0e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733789112587816469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55fe311f-4610-4805-9fb7-3f1cac7c96e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b237f65b1f52cc0946de1dff121714fb7635a1fbe982c80c699291423b82ccf8,PodSandboxId:37db34926854c5a7226a215f95806d6c5ad4d2b2363dc7f29cd5e4ee4c161c10,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112185958500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-56djc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e25bad9-b88a-4ac8-a180-968bf6b057a2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a0e82982a44e43e0ed41ee3568d34c43378472c4ad70abc0278c8546b5eab1,PodSandboxId:b3eaf6f8899f7f7c5723661565f19184e409f490d8d44b49681ddbfa54f074e9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789112059041810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8rxx7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f
dfd41b-8ef7-41af-a703-ebecfe9ad319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b333a8bf49675208e97a25fca7ababa5c5e7aa1fa332e2d69b8abbca962a2c4,PodSandboxId:5c8d4180070cac533a1c5986663ae2a8f67bb62de994ac84b837d4580da01292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733789111448208569,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qklxb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bb029a1-abf9-4825-b9ec-0520a78cb3d8,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c119307a718a68531ce047e24f34ab358273d68db90f69181c4d18122412639e,PodSandboxId:2f0a05c488178f0ab5a198f121cc114fa7792a6f4c7860cd619fc0c963929221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789100581038645,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36b46162662e8ae135696fdf8945315,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280dbddeda2f1b7a46a964534d86646310bb4e84b9820f666730c6b111b08f6,PodSandboxId:c72f6f0dc52dd73dcbd327ba4546c4ee3ad1aad42f23613604bc2f31fe087951,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:173378910061119
1956,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7012c4523526373176e1bdb3740fe4c3f0da98aefab113c29ec3fd8fc86dd,PodSandboxId:0604e23adb4603bc51d0c42824765a240119b427d9680120e1c727a002283b77,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789100574603874,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d3ccc5731b295ff6bc2d83c07ce51c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63e80d74c90d945f31cb7bab277568619312b598dfb63dd17020118f61ed233,PodSandboxId:32cf03c03dbf5e86519da793fb2cb443fe863efefb7f3e9afd145c7dbd3000fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789100520822298,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ef24a05a34425fdec92ff0a9fd5bbfa,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a425c2d931ef8748e90cd391f2121a62c9cbbaf1a686b2b9f5f811110fffbf58,PodSandboxId:7da95a45dfd0f1e9baa3c1bc2763cb0bd944cabac77d851c0432c361b0139d70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733788815385624290,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-048296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9acb3bdc1edc385116fe4109f78c762a,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=692a282f-f48c-432d-a1f2-dcab321c14e9 name=/runtime.v1.RuntimeService/ListContainers
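Editor's note: the CRI-O entries above are the server-side trace of CRI gRPC calls (/runtime.v1.RuntimeService/ListContainers, /runtime.v1.ImageService/ImageFsInfo) arriving over the crio socket. A hedged sketch of the client side of the same ListContainers call, using the published k8s.io/cri-api bindings; the socket path matches the node's cri-socket annotation further down, the rest is illustrative:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the CRI-O socket advertised in the node's cri-socket annotation.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same call that produced the ListContainersResponse dumps above:
        // an empty filter returns the full container list.
        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
        }
    }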
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	97d81c851470f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   3e63d3f10ab36       storage-provisioner
	b237f65b1f52c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 minutes ago      Running             coredns                   0                   37db34926854c       coredns-7c65d6cfc9-56djc
	94a0e82982a44       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 minutes ago      Running             coredns                   0                   b3eaf6f8899f7       coredns-7c65d6cfc9-8rxx7
	7b333a8bf4967       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   13 minutes ago      Running             kube-proxy                0                   5c8d4180070ca       kube-proxy-qklxb
	9280dbddeda2f       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Running             kube-apiserver            2                   c72f6f0dc52dd       kube-apiserver-no-preload-048296
	c119307a718a6       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   14 minutes ago      Running             kube-controller-manager   2                   2f0a05c488178       kube-controller-manager-no-preload-048296
	2ad7012c45235       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   0604e23adb460       etcd-no-preload-048296
	a63e80d74c90d       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   14 minutes ago      Running             kube-scheduler            2                   32cf03c03dbf5       kube-scheduler-no-preload-048296
	a425c2d931ef8       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   18 minutes ago      Exited              kube-apiserver            1                   7da95a45dfd0f       kube-apiserver-no-preload-048296
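Editor's note: the CREATED column in this table is derived from the nanosecond CreatedAt values in the raw ListContainersResponse above (for example 1733789112587816469 for the storage-provisioner container), measured against the log time of 00:19:07 UTC. A quick check of that conversion:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // CreatedAt in the CRI response is Unix time in nanoseconds.
        created := time.Unix(0, 1733789112587816469)
        // The surrounding CRI-O log lines were written at 2024-12-10 00:19:07 UTC.
        logTime := time.Date(2024, 12, 10, 0, 19, 7, 0, time.UTC)

        fmt.Println(created.UTC())        // 2024-12-10 00:05:12.587816469 +0000 UTC
        fmt.Println(logTime.Sub(created)) // ≈13m54s, rendered above as "13 minutes ago"
    }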
	
	
	==> coredns [94a0e82982a44e43e0ed41ee3568d34c43378472c4ad70abc0278c8546b5eab1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [b237f65b1f52cc0946de1dff121714fb7635a1fbe982c80c699291423b82ccf8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
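Editor's note: both CoreDNS replicas above serve DNS on port 53 and metrics on 9153 (see the containerPort entries in the CRI dump). A rough illustration, not taken from this log: inside a pod on the cluster, the stub resolver in /etc/resolv.conf points at the cluster DNS service backed by these replicas, so a plain lookup exercises them:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Resolved through the cluster DNS service, i.e. the coredns pods above,
        // when run inside a pod on this cluster (assumption, not shown in the log).
        addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
        if err != nil {
            panic(err)
        }
        fmt.Println(addrs)
    }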
	
	
	==> describe nodes <==
	Name:               no-preload-048296
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-048296
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
	                    minikube.k8s.io/name=no-preload-048296
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T00_05_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:05:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-048296
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:19:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:15:29 +0000   Tue, 10 Dec 2024 00:05:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:15:29 +0000   Tue, 10 Dec 2024 00:05:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:15:29 +0000   Tue, 10 Dec 2024 00:05:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:15:29 +0000   Tue, 10 Dec 2024 00:05:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.182
	  Hostname:    no-preload-048296
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f1dc8b7771a4a5b875e807a54aba941
	  System UUID:                0f1dc8b7-771a-4a5b-875e-807a54aba941
	  Boot ID:                    a9df5da5-b4ac-4b82-9890-7eae9599cfa2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-56djc                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-8rxx7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-048296                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-048296             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-048296    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-qklxb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-048296             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-n2f8c              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-048296 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-048296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-048296 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node no-preload-048296 event: Registered Node no-preload-048296 in Controller
	
	
	==> dmesg <==
	[  +0.039409] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.141120] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.947644] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.626267] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.443706] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.061690] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057629] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.167722] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.135919] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.250292] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[Dec10 00:00] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.070665] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.699818] systemd-fstab-generator[1427]: Ignoring "noauto" option for root device
	[  +2.679912] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.136189] kauditd_printk_skb: 53 callbacks suppressed
	[ +27.680489] kauditd_printk_skb: 32 callbacks suppressed
	[Dec10 00:04] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.776819] systemd-fstab-generator[3106]: Ignoring "noauto" option for root device
	[Dec10 00:05] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.468178] systemd-fstab-generator[3423]: Ignoring "noauto" option for root device
	[  +5.395530] systemd-fstab-generator[3549]: Ignoring "noauto" option for root device
	[  +0.105350] kauditd_printk_skb: 14 callbacks suppressed
	[Dec10 00:06] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [2ad7012c4523526373176e1bdb3740fe4c3f0da98aefab113c29ec3fd8fc86dd] <==
	{"level":"info","ts":"2024-12-10T00:05:01.011087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 switched to configuration voters=(2556367418693598595)"}
	{"level":"info","ts":"2024-12-10T00:05:01.011200Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5f1ac972b1bdc8ed","local-member-id":"237a0a9f829d3d83","added-peer-id":"237a0a9f829d3d83","added-peer-peer-urls":["https://192.168.61.182:2380"]}
	{"level":"info","ts":"2024-12-10T00:05:01.813144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-10T00:05:01.813226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-10T00:05:01.813264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 received MsgPreVoteResp from 237a0a9f829d3d83 at term 1"}
	{"level":"info","ts":"2024-12-10T00:05:01.813277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 became candidate at term 2"}
	{"level":"info","ts":"2024-12-10T00:05:01.813283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 received MsgVoteResp from 237a0a9f829d3d83 at term 2"}
	{"level":"info","ts":"2024-12-10T00:05:01.813299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"237a0a9f829d3d83 became leader at term 2"}
	{"level":"info","ts":"2024-12-10T00:05:01.813306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 237a0a9f829d3d83 elected leader 237a0a9f829d3d83 at term 2"}
	{"level":"info","ts":"2024-12-10T00:05:01.814523Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"237a0a9f829d3d83","local-member-attributes":"{Name:no-preload-048296 ClientURLs:[https://192.168.61.182:2379]}","request-path":"/0/members/237a0a9f829d3d83/attributes","cluster-id":"5f1ac972b1bdc8ed","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-10T00:05:01.814624Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:05:01.814686Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:05:01.815099Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:05:01.816181Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T00:05:01.817621Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T00:05:01.818173Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-10T00:05:01.820145Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-10T00:05:01.820915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-10T00:05:01.821484Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.182:2379"}
	{"level":"info","ts":"2024-12-10T00:05:01.821734Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5f1ac972b1bdc8ed","local-member-id":"237a0a9f829d3d83","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:05:01.827989Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:05:01.828041Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:15:01.863976Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2024-12-10T00:15:01.872329Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":682,"took":"8.057004ms","hash":592612879,"current-db-size-bytes":2351104,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2351104,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-12-10T00:15:01.872386Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":592612879,"revision":682,"compact-revision":-1}
	
	
	==> kernel <==
	 00:19:07 up 19 min,  0 users,  load average: 0.04, 0.18, 0.14
	Linux no-preload-048296 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9280dbddeda2f1b7a46a964534d86646310bb4e84b9820f666730c6b111b08f6] <==
	W1210 00:15:04.185670       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:15:04.185739       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 00:15:04.186921       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:15:04.187013       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 00:16:04.187500       1 handler_proxy.go:99] no RequestInfo found in the context
	W1210 00:16:04.187510       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:16:04.187693       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 00:16:04.187734       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 00:16:04.188860       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:16:04.188865       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 00:18:04.189170       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:18:04.189303       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1210 00:18:04.189407       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 00:18:04.189497       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 00:18:04.190449       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 00:18:04.191583       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [a425c2d931ef8748e90cd391f2121a62c9cbbaf1a686b2b9f5f811110fffbf58] <==
	W1210 00:04:54.970227       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:54.986188       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.032527       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.038141       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.098727       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.112732       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.136217       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.159225       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.160487       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.350574       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.370158       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.376537       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.416096       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.424705       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.440513       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.443109       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.449507       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.571442       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.577803       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.662231       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.698343       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.699791       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:55.818961       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:56.001537       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:04:56.110688       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c119307a718a68531ce047e24f34ab358273d68db90f69181c4d18122412639e] <==
	E1210 00:13:40.187496       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:13:40.633938       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:14:10.195185       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:14:10.641400       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:14:40.200511       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:14:40.647809       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:15:10.207594       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:15:10.655452       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:15:29.377029       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-048296"
	E1210 00:15:40.214161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:15:40.661973       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:16:10.219644       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:16:10.671796       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 00:16:21.756970       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="74.209µs"
	I1210 00:16:35.762437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="50.98µs"
	E1210 00:16:40.224809       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:16:40.680112       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:17:10.230423       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:17:10.686738       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:17:40.236673       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:17:40.695736       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:18:10.243101       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:18:10.705420       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 00:18:40.248279       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 00:18:40.711972       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7b333a8bf49675208e97a25fca7ababa5c5e7aa1fa332e2d69b8abbca962a2c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 00:05:11.990681       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 00:05:12.023994       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.182"]
	E1210 00:05:12.024121       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 00:05:12.431239       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 00:05:12.431307       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 00:05:12.431333       1 server_linux.go:169] "Using iptables Proxier"
	I1210 00:05:12.439480       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 00:05:12.439688       1 server.go:483] "Version info" version="v1.31.2"
	I1210 00:05:12.439720       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 00:05:12.465079       1 config.go:199] "Starting service config controller"
	I1210 00:05:12.465113       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 00:05:12.465144       1 config.go:105] "Starting endpoint slice config controller"
	I1210 00:05:12.465148       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 00:05:12.465530       1 config.go:328] "Starting node config controller"
	I1210 00:05:12.465557       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 00:05:12.566263       1 shared_informer.go:320] Caches are synced for node config
	I1210 00:05:12.566272       1 shared_informer.go:320] Caches are synced for service config
	I1210 00:05:12.566324       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a63e80d74c90d945f31cb7bab277568619312b598dfb63dd17020118f61ed233] <==
	W1210 00:05:03.222985       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 00:05:03.223024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:03.223091       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1210 00:05:03.223119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:03.223375       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 00:05:03.223459       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:03.226074       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 00:05:03.226113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.041310       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1210 00:05:04.041373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.088034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1210 00:05:04.088096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.190450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 00:05:04.190494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.263377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 00:05:04.263426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.365287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 00:05:04.365340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.370243       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1210 00:05:04.370289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.441548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 00:05:04.441687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:05:04.466720       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 00:05:04.466800       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1210 00:05:07.682567       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 00:18:05 no-preload-048296 kubelet[3430]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:18:05 no-preload-048296 kubelet[3430]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:18:05 no-preload-048296 kubelet[3430]: E1210 00:18:05.963002    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789885962674248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:05 no-preload-048296 kubelet[3430]: E1210 00:18:05.963036    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789885962674248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:15 no-preload-048296 kubelet[3430]: E1210 00:18:15.967225    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789895966407068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:15 no-preload-048296 kubelet[3430]: E1210 00:18:15.967321    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789895966407068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:17 no-preload-048296 kubelet[3430]: E1210 00:18:17.742425    3430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n2f8c" podUID="8e9f56c9-fd67-4715-9148-1255be17f1fe"
	Dec 10 00:18:25 no-preload-048296 kubelet[3430]: E1210 00:18:25.968886    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789905968593871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:25 no-preload-048296 kubelet[3430]: E1210 00:18:25.968911    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789905968593871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:32 no-preload-048296 kubelet[3430]: E1210 00:18:32.741528    3430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n2f8c" podUID="8e9f56c9-fd67-4715-9148-1255be17f1fe"
	Dec 10 00:18:35 no-preload-048296 kubelet[3430]: E1210 00:18:35.971350    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789915971079609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:35 no-preload-048296 kubelet[3430]: E1210 00:18:35.971421    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789915971079609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:43 no-preload-048296 kubelet[3430]: E1210 00:18:43.741635    3430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n2f8c" podUID="8e9f56c9-fd67-4715-9148-1255be17f1fe"
	Dec 10 00:18:45 no-preload-048296 kubelet[3430]: E1210 00:18:45.972891    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789925972585812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:45 no-preload-048296 kubelet[3430]: E1210 00:18:45.972933    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789925972585812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:55 no-preload-048296 kubelet[3430]: E1210 00:18:55.975215    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789935974129436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:55 no-preload-048296 kubelet[3430]: E1210 00:18:55.975271    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789935974129436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:18:58 no-preload-048296 kubelet[3430]: E1210 00:18:58.741796    3430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n2f8c" podUID="8e9f56c9-fd67-4715-9148-1255be17f1fe"
	Dec 10 00:19:05 no-preload-048296 kubelet[3430]: E1210 00:19:05.755431    3430 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 00:19:05 no-preload-048296 kubelet[3430]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 00:19:05 no-preload-048296 kubelet[3430]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 00:19:05 no-preload-048296 kubelet[3430]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:19:05 no-preload-048296 kubelet[3430]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:19:05 no-preload-048296 kubelet[3430]: E1210 00:19:05.976430    3430 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789945976068160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:19:05 no-preload-048296 kubelet[3430]: E1210 00:19:05.976458    3430 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789945976068160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [97d81c851470ff8904f2a704c67542ee6316104d04ab675437a01ea6f07fbfd8] <==
	I1210 00:05:12.693725       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 00:05:12.713369       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 00:05:12.713514       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 00:05:12.725931       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 00:05:12.726255       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-048296_dbf8be4c-2ead-454a-9e06-4ce0104bad17!
	I1210 00:05:12.728292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"90ea2f99-6d89-41c8-bb4b-6fa2ca14b65a", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-048296_dbf8be4c-2ead-454a-9e06-4ce0104bad17 became leader
	I1210 00:05:12.827570       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-048296_dbf8be4c-2ead-454a-9e06-4ce0104bad17!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-048296 -n no-preload-048296
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-048296 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-n2f8c
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-048296 describe pod metrics-server-6867b74b74-n2f8c
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-048296 describe pod metrics-server-6867b74b74-n2f8c: exit status 1 (65.706305ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-n2f8c" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-048296 describe pod metrics-server-6867b74b74-n2f8c: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (285.24s)
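The kubelet log above shows the metrics-server pod repeatedly hitting ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, so the addon never becomes healthy and the post-mortem describe at the end returns NotFound for it. A minimal sketch of how one might verify this by hand against the same context; the k8s-app=metrics-server label selector is an assumption about how the addon labels its pods, not something taken from this log:

	# Assumption: the minikube metrics-server addon labels its pods k8s-app=metrics-server.
	kubectl --context no-preload-048296 -n kube-system get pods -l k8s-app=metrics-server -o wide
	# Describe whichever pod the selector returns; the exact name from the log
	# (metrics-server-6867b74b74-n2f8c) may already be gone, as the NotFound above shows.
	kubectl --context no-preload-048296 -n kube-system describe pod -l k8s-app=metrics-server
	# The Events section should show the kubelet backing off pulling
	# fake.domain/registry.k8s.io/echoserver:1.4.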

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (128.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:18:00.639550   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/calico-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:18:12.522445   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:18:28.185874   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
E1210 00:18:38.331179   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.188:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.188:8443: connect: connection refused
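The identical warnings above are the helper repeatedly polling the dashboard pod list while the apiserver on 192.168.39.188:8443 refuses connections. A rough manual equivalent of that poll, using the same context, namespace, and label selector that appear in the warning, is:

    kubectl --context old-k8s-version-720064 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard

With the apiserver stopped, this returns the same "connection refused" error until the control plane comes back up.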
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-720064 -n old-k8s-version-720064
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-720064 -n old-k8s-version-720064: exit status 2 (235.933519ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-720064" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-720064 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-720064 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.065µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-720064 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
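The image assertion above could not produce any deployment info because the context deadline had already expired and the apiserver was stopped. Once the apiserver is reachable again, the image the check looks for can be inspected directly; a sketch using the deployment named in the test output:

    kubectl --context old-k8s-version-720064 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'

On a passing run this would print an image string containing registry.k8s.io/echoserver:1.4, the custom MetricsScraper image the addon was enabled with.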
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064: exit status 2 (238.437668ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
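Both status checks use minikube's Go-template output; the same host and apiserver fields queried separately above can also be requested in one call, for example:

    out/minikube-linux-amd64 status -p old-k8s-version-720064 \
      --format='{{.Host}}/{{.APIServer}}'

Here that would report Running for the host and Stopped for the apiserver, which is why the helper skips the kubectl-based checks.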
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-720064 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-720064 logs -n 25: (1.483467671s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-030585 sudo cat                              | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo                                  | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo find                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-030585 sudo crio                             | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-030585                                       | bridge-030585                | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-866797 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:50 UTC |
	|         | disable-driver-mounts-866797                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:50 UTC | 09 Dec 24 23:52 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-048296             | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-825613            | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC | 09 Dec 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-871210  | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC | 09 Dec 24 23:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:52 UTC |                     |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-720064        | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-048296                  | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-825613                 | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-048296                                   | no-preload-048296            | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-825613                                  | embed-certs-825613           | jenkins | v1.34.0 | 09 Dec 24 23:53 UTC | 10 Dec 24 00:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-871210       | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-871210 | jenkins | v1.34.0 | 09 Dec 24 23:54 UTC | 10 Dec 24 00:04 UTC |
	|         | default-k8s-diff-port-871210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-720064             | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-720064                              | old-k8s-version-720064       | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
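	# The "Last Start" section below corresponds roughly to this invocation,
	# reconstructed from the final audit rows above (flags copied verbatim from the table):
	out/minikube-linux-amd64 start -p old-k8s-version-720064 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default \
	  --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --keep-context=false --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0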
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:55:25
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:55:25.509412   84547 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:55:25.509516   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509524   84547 out.go:358] Setting ErrFile to fd 2...
	I1209 23:55:25.509541   84547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:25.509720   84547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:55:25.510225   84547 out.go:352] Setting JSON to false
	I1209 23:55:25.511117   84547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9476,"bootTime":1733779049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:55:25.511206   84547 start.go:139] virtualization: kvm guest
	I1209 23:55:25.513214   84547 out.go:177] * [old-k8s-version-720064] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:55:25.514823   84547 notify.go:220] Checking for updates...
	I1209 23:55:25.514845   84547 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:55:25.516067   84547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:55:25.517350   84547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:55:25.518678   84547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:55:25.520082   84547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:55:25.521432   84547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:55:25.523092   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:55:25.523499   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.523548   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.538209   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I1209 23:55:25.538728   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.539263   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.539282   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.539570   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.539735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.541550   84547 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 23:55:25.542864   84547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:55:25.543159   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:55:25.543196   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:55:25.558672   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1209 23:55:25.559075   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:55:25.559524   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:55:25.559547   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:55:25.559881   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:55:25.560082   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:55:25.595916   84547 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 23:55:25.597365   84547 start.go:297] selected driver: kvm2
	I1209 23:55:25.597380   84547 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.597473   84547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:55:25.598182   84547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.598251   84547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:55:25.612993   84547 install.go:137] /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:55:25.613401   84547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:55:25.613430   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:55:25.613473   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:55:25.613509   84547 start.go:340] cluster config:
	{Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:25.613608   84547 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:55:25.615371   84547 out.go:177] * Starting "old-k8s-version-720064" primary control-plane node in "old-k8s-version-720064" cluster
	I1209 23:55:25.371778   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:25.616599   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:55:25.616629   84547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 23:55:25.616636   84547 cache.go:56] Caching tarball of preloaded images
	I1209 23:55:25.616716   84547 preload.go:172] Found /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:55:25.616727   84547 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1209 23:55:25.616809   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:55:25.616984   84547 start.go:360] acquireMachinesLock for old-k8s-version-720064: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:55:28.439810   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:34.519811   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:37.591794   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:43.671858   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:46.743891   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:52.823826   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:55:55.895854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:01.975845   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:05.047862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:11.127840   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:14.199834   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:20.279853   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:23.351850   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:29.431829   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:32.503859   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:38.583822   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:41.655831   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:47.735829   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:50.807862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:56.887827   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:56:59.959820   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:06.039852   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:09.111843   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:15.191823   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:18.263824   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:24.343852   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:27.415807   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:33.495843   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:36.567854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:42.647854   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:45.719902   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:51.799842   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:57:54.871862   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:00.951877   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:04.023859   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:10.103869   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:13.175788   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:19.255805   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:22.327784   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:28.407842   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:31.479864   83859 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.182:22: connect: no route to host
	I1209 23:58:34.483630   83900 start.go:364] duration metric: took 4m35.142298634s to acquireMachinesLock for "embed-certs-825613"
	I1209 23:58:34.483690   83900 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:58:34.483698   83900 fix.go:54] fixHost starting: 
	I1209 23:58:34.484038   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:58:34.484080   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:58:34.499267   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I1209 23:58:34.499717   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:58:34.500208   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:58:34.500236   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:58:34.500552   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:58:34.500747   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:34.500886   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:58:34.502318   83900 fix.go:112] recreateIfNeeded on embed-certs-825613: state=Stopped err=<nil>
	I1209 23:58:34.502340   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	W1209 23:58:34.502480   83900 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:58:34.504290   83900 out.go:177] * Restarting existing kvm2 VM for "embed-certs-825613" ...
	I1209 23:58:34.481505   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:58:34.481545   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:58:34.481889   83859 buildroot.go:166] provisioning hostname "no-preload-048296"
	I1209 23:58:34.481917   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:58:34.482120   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:58:34.483492   83859 machine.go:96] duration metric: took 4m37.418278223s to provisionDockerMachine
	I1209 23:58:34.483534   83859 fix.go:56] duration metric: took 4m37.439725581s for fixHost
	I1209 23:58:34.483542   83859 start.go:83] releasing machines lock for "no-preload-048296", held for 4m37.439753726s
	W1209 23:58:34.483579   83859 start.go:714] error starting host: provision: host is not running
	W1209 23:58:34.483682   83859 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1209 23:58:34.483694   83859 start.go:729] Will try again in 5 seconds ...
	I1209 23:58:34.505520   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Start
	I1209 23:58:34.505676   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring networks are active...
	I1209 23:58:34.506381   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring network default is active
	I1209 23:58:34.506676   83900 main.go:141] libmachine: (embed-certs-825613) Ensuring network mk-embed-certs-825613 is active
	I1209 23:58:34.507046   83900 main.go:141] libmachine: (embed-certs-825613) Getting domain xml...
	I1209 23:58:34.507727   83900 main.go:141] libmachine: (embed-certs-825613) Creating domain...
	I1209 23:58:35.719784   83900 main.go:141] libmachine: (embed-certs-825613) Waiting to get IP...
	I1209 23:58:35.720797   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:35.721325   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:35.721377   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:35.721289   85186 retry.go:31] will retry after 251.225165ms: waiting for machine to come up
	I1209 23:58:35.973801   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:35.974247   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:35.974276   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:35.974205   85186 retry.go:31] will retry after 298.029679ms: waiting for machine to come up
	I1209 23:58:36.273658   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:36.274090   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:36.274119   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:36.274049   85186 retry.go:31] will retry after 467.217404ms: waiting for machine to come up
	I1209 23:58:36.742591   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:36.743027   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:36.743052   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:36.742982   85186 retry.go:31] will retry after 409.741731ms: waiting for machine to come up
	I1209 23:58:37.154543   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:37.154958   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:37.154982   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:37.154916   85186 retry.go:31] will retry after 723.02759ms: waiting for machine to come up
	I1209 23:58:37.878986   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:37.879474   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:37.879507   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:37.879423   85186 retry.go:31] will retry after 767.399861ms: waiting for machine to come up
	I1209 23:58:38.648416   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:38.648903   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:38.648927   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:38.648853   85186 retry.go:31] will retry after 1.06423499s: waiting for machine to come up
	I1209 23:58:39.485363   83859 start.go:360] acquireMachinesLock for no-preload-048296: {Name:mk4dcfc37a573a1a4ae580e307534187cad5176b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:58:39.714711   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:39.715114   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:39.715147   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:39.715058   85186 retry.go:31] will retry after 1.402884688s: waiting for machine to come up
	I1209 23:58:41.119828   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:41.120315   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:41.120359   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:41.120258   85186 retry.go:31] will retry after 1.339874475s: waiting for machine to come up
	I1209 23:58:42.461858   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:42.462314   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:42.462343   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:42.462261   85186 retry.go:31] will retry after 1.455809273s: waiting for machine to come up
	I1209 23:58:43.920097   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:43.920418   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:43.920448   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:43.920371   85186 retry.go:31] will retry after 2.872969061s: waiting for machine to come up
	I1209 23:58:46.796163   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:46.796477   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:46.796499   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:46.796439   85186 retry.go:31] will retry after 2.530677373s: waiting for machine to come up
	I1209 23:58:49.330146   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:49.330601   83900 main.go:141] libmachine: (embed-certs-825613) DBG | unable to find current IP address of domain embed-certs-825613 in network mk-embed-certs-825613
	I1209 23:58:49.330631   83900 main.go:141] libmachine: (embed-certs-825613) DBG | I1209 23:58:49.330561   85186 retry.go:31] will retry after 4.492372507s: waiting for machine to come up
	I1209 23:58:53.827982   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.828461   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has current primary IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.828480   83900 main.go:141] libmachine: (embed-certs-825613) Found IP for machine: 192.168.50.19
	I1209 23:58:53.828492   83900 main.go:141] libmachine: (embed-certs-825613) Reserving static IP address...
	I1209 23:58:53.829001   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "embed-certs-825613", mac: "52:54:00:0f:2e:da", ip: "192.168.50.19"} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.829028   83900 main.go:141] libmachine: (embed-certs-825613) Reserved static IP address: 192.168.50.19
	I1209 23:58:53.829051   83900 main.go:141] libmachine: (embed-certs-825613) DBG | skip adding static IP to network mk-embed-certs-825613 - found existing host DHCP lease matching {name: "embed-certs-825613", mac: "52:54:00:0f:2e:da", ip: "192.168.50.19"}
	I1209 23:58:53.829067   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Getting to WaitForSSH function...
	I1209 23:58:53.829080   83900 main.go:141] libmachine: (embed-certs-825613) Waiting for SSH to be available...
	I1209 23:58:53.831079   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.831430   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.831462   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.831630   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Using SSH client type: external
	I1209 23:58:53.831682   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa (-rw-------)
	I1209 23:58:53.831723   83900 main.go:141] libmachine: (embed-certs-825613) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:58:53.831752   83900 main.go:141] libmachine: (embed-certs-825613) DBG | About to run SSH command:
	I1209 23:58:53.831765   83900 main.go:141] libmachine: (embed-certs-825613) DBG | exit 0
	I1209 23:58:53.959446   83900 main.go:141] libmachine: (embed-certs-825613) DBG | SSH cmd err, output: <nil>: 
	I1209 23:58:53.959864   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetConfigRaw
	I1209 23:58:53.960515   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:53.963227   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.963614   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.963644   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.963889   83900 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/config.json ...
	I1209 23:58:53.964086   83900 machine.go:93] provisionDockerMachine start ...
	I1209 23:58:53.964103   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:53.964299   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:53.966516   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.966803   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:53.966834   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:53.966959   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:53.967149   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:53.967292   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:53.967428   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:53.967599   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:53.967824   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:53.967839   83900 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:58:54.079701   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:58:54.079732   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.080045   83900 buildroot.go:166] provisioning hostname "embed-certs-825613"
	I1209 23:58:54.080079   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.080333   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.082930   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.083283   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.083321   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.083420   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.083657   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.083834   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.083974   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.084095   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.084269   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.084282   83900 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-825613 && echo "embed-certs-825613" | sudo tee /etc/hostname
	I1209 23:58:54.208724   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825613
	
	I1209 23:58:54.208754   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.211739   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.212095   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.212122   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.212297   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.212513   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.212763   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.212936   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.213102   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.213285   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.213311   83900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-825613' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-825613/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-825613' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:58:55.043989   84259 start.go:364] duration metric: took 4m2.194304919s to acquireMachinesLock for "default-k8s-diff-port-871210"
	I1209 23:58:55.044059   84259 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:58:55.044067   84259 fix.go:54] fixHost starting: 
	I1209 23:58:55.044480   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:58:55.044546   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:58:55.060908   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I1209 23:58:55.061391   84259 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:58:55.061954   84259 main.go:141] libmachine: Using API Version  1
	I1209 23:58:55.061982   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:58:55.062346   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:58:55.062508   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:58:55.062674   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1209 23:58:55.064365   84259 fix.go:112] recreateIfNeeded on default-k8s-diff-port-871210: state=Stopped err=<nil>
	I1209 23:58:55.064386   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	W1209 23:58:55.064564   84259 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:58:55.066707   84259 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-871210" ...
	I1209 23:58:54.331911   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:58:54.331941   83900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:58:54.331966   83900 buildroot.go:174] setting up certificates
	I1209 23:58:54.331978   83900 provision.go:84] configureAuth start
	I1209 23:58:54.331991   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetMachineName
	I1209 23:58:54.332275   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:54.334597   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.334926   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.334957   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.335117   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.337393   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.337731   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.337763   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.337876   83900 provision.go:143] copyHostCerts
	I1209 23:58:54.337923   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:58:54.337936   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:58:54.338006   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:58:54.338093   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:58:54.338101   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:58:54.338127   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:58:54.338204   83900 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:58:54.338212   83900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:58:54.338233   83900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:58:54.338286   83900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.embed-certs-825613 san=[127.0.0.1 192.168.50.19 embed-certs-825613 localhost minikube]
	I1209 23:58:54.423610   83900 provision.go:177] copyRemoteCerts
	I1209 23:58:54.423679   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:58:54.423706   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.426695   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.427009   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.427041   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.427144   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.427332   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.427516   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.427706   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:54.513991   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 23:58:54.536700   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:58:54.558541   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:58:54.580319   83900 provision.go:87] duration metric: took 248.326924ms to configureAuth
	I1209 23:58:54.580359   83900 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:58:54.580568   83900 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:58:54.580652   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.583347   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.583720   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.583748   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.583963   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.584171   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.584327   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.584481   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.584621   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.584775   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.584790   83900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:58:54.804814   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:58:54.804860   83900 machine.go:96] duration metric: took 840.759848ms to provisionDockerMachine
	I1209 23:58:54.804876   83900 start.go:293] postStartSetup for "embed-certs-825613" (driver="kvm2")
	I1209 23:58:54.804891   83900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:58:54.804916   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:54.805269   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:58:54.805297   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.807977   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.808340   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.808370   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.808505   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.808765   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.808945   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.809100   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:54.893741   83900 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:58:54.897936   83900 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:58:54.897961   83900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:58:54.898065   83900 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:58:54.898145   83900 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:58:54.898235   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:58:54.907056   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:58:54.929618   83900 start.go:296] duration metric: took 124.726532ms for postStartSetup
	I1209 23:58:54.929664   83900 fix.go:56] duration metric: took 20.445966428s for fixHost
	I1209 23:58:54.929711   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:54.932476   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.932866   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:54.932893   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:54.933108   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:54.933334   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.933523   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:54.933638   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:54.933747   83900 main.go:141] libmachine: Using SSH client type: native
	I1209 23:58:54.933956   83900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.19 22 <nil> <nil>}
	I1209 23:58:54.933967   83900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:58:55.043841   83900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788735.012193373
	
	I1209 23:58:55.043863   83900 fix.go:216] guest clock: 1733788735.012193373
	I1209 23:58:55.043870   83900 fix.go:229] Guest: 2024-12-09 23:58:55.012193373 +0000 UTC Remote: 2024-12-09 23:58:54.929689658 +0000 UTC m=+295.728685701 (delta=82.503715ms)
	I1209 23:58:55.043888   83900 fix.go:200] guest clock delta is within tolerance: 82.503715ms
	I1209 23:58:55.043892   83900 start.go:83] releasing machines lock for "embed-certs-825613", held for 20.560223511s
	I1209 23:58:55.043915   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.044198   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:55.046852   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.047246   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.047283   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.047446   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.047910   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.048101   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:58:55.048198   83900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:58:55.048235   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:55.048424   83900 ssh_runner.go:195] Run: cat /version.json
	I1209 23:58:55.048453   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:58:55.050905   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051274   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.051317   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051339   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051502   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:55.051721   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:55.051772   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:55.051794   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:55.051925   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:55.051980   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:58:55.052091   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:55.052133   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:58:55.052263   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:58:55.052407   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:58:55.136384   83900 ssh_runner.go:195] Run: systemctl --version
	I1209 23:58:55.154927   83900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:58:55.297639   83900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:58:55.303733   83900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:58:55.303791   83900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:58:55.323858   83900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:58:55.323884   83900 start.go:495] detecting cgroup driver to use...
	I1209 23:58:55.323955   83900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:58:55.342390   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:58:55.360400   83900 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:58:55.360463   83900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:58:55.374005   83900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:58:55.387515   83900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:58:55.507866   83900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:58:55.676259   83900 docker.go:233] disabling docker service ...
	I1209 23:58:55.676338   83900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:58:55.695273   83900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:58:55.707909   83900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:58:55.824683   83900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:58:55.934080   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:58:55.950700   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:58:55.967756   83900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:58:55.967813   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.978028   83900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:58:55.978102   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.988960   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:55.999661   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.010253   83900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:58:56.020928   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.030944   83900 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.050251   83900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:58:56.062723   83900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:58:56.072435   83900 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:58:56.072501   83900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:58:56.085332   83900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:58:56.095538   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:58:56.214133   83900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:58:56.310023   83900 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:58:56.310107   83900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:58:56.314988   83900 start.go:563] Will wait 60s for crictl version
	I1209 23:58:56.315057   83900 ssh_runner.go:195] Run: which crictl
	I1209 23:58:56.318865   83900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:58:56.356889   83900 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:58:56.356996   83900 ssh_runner.go:195] Run: crio --version
	I1209 23:58:56.387128   83900 ssh_runner.go:195] Run: crio --version
	I1209 23:58:56.417781   83900 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:58:55.068139   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Start
	I1209 23:58:55.068329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring networks are active...
	I1209 23:58:55.069026   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring network default is active
	I1209 23:58:55.069526   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Ensuring network mk-default-k8s-diff-port-871210 is active
	I1209 23:58:55.069970   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Getting domain xml...
	I1209 23:58:55.070725   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Creating domain...
	I1209 23:58:56.366161   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting to get IP...
	I1209 23:58:56.367215   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.367659   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.367720   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.367639   85338 retry.go:31] will retry after 212.733452ms: waiting for machine to come up
	I1209 23:58:56.582400   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.582945   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.582973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.582895   85338 retry.go:31] will retry after 380.03081ms: waiting for machine to come up
	I1209 23:58:56.964721   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.965265   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:56.965296   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:56.965208   85338 retry.go:31] will retry after 429.612511ms: waiting for machine to come up
	I1209 23:58:57.396713   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.397267   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.397298   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:57.397236   85338 retry.go:31] will retry after 595.581233ms: waiting for machine to come up
	I1209 23:58:56.418967   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetIP
	I1209 23:58:56.422317   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:56.422632   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:58:56.422656   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:58:56.422925   83900 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 23:58:56.426992   83900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:58:56.440436   83900 kubeadm.go:883] updating cluster {Name:embed-certs-825613 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:58:56.440548   83900 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:58:56.440592   83900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:58:56.475823   83900 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:58:56.475907   83900 ssh_runner.go:195] Run: which lz4
	I1209 23:58:56.479747   83900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:58:56.483850   83900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:58:56.483900   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:58:57.820979   83900 crio.go:462] duration metric: took 1.341265764s to copy over tarball
	I1209 23:58:57.821091   83900 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:58:57.994077   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.994566   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:57.994590   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:57.994526   85338 retry.go:31] will retry after 728.981009ms: waiting for machine to come up
	I1209 23:58:58.725707   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:58.726209   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:58.726252   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:58.726125   85338 retry.go:31] will retry after 701.836089ms: waiting for machine to come up
	I1209 23:58:59.429804   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:58:59.430322   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:58:59.430350   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:58:59.430266   85338 retry.go:31] will retry after 735.800538ms: waiting for machine to come up
	I1209 23:59:00.167774   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:00.168363   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:00.168397   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:00.168319   85338 retry.go:31] will retry after 1.052511845s: waiting for machine to come up
	I1209 23:59:01.222463   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:01.222960   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:01.222991   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:01.222919   85338 retry.go:31] will retry after 1.655880765s: waiting for machine to come up
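The retry.go:31 entries above show the driver polling libvirt for the VM's DHCP lease, sleeping a growing, jittered interval between attempts. Purely as an illustrative sketch (the helper name, attempt count, and jitter range are assumptions, not minikube's actual retry API), such a loop in Go could look like:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor keeps calling check until it succeeds or attempts run out,
    // sleeping a randomized, roughly increasing interval between tries,
    // mirroring the "will retry after 212ms / 380ms / 429ms ..." cadence logged above.
    func waitFor(attempts int, check func() error) error {
    	for i := 0; i < attempts; i++ {
    		if err := check(); err == nil {
    			return nil
    		} else {
    			wait := time.Duration(200+rand.Intn(300*(i+1))) * time.Millisecond
    			fmt.Printf("will retry after %v: %v\n", wait, err)
    			time.Sleep(wait)
    		}
    	}
    	return errors.New("machine did not come up")
    }

    func main() {
    	tries := 0
    	_ = waitFor(10, func() error {
    		tries++
    		if tries < 4 {
    			return errors.New("unable to find current IP address") // simulated missing DHCP lease
    		}
    		return nil
    	})
    }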
	I1209 23:58:59.925304   83900 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104178296s)
	I1209 23:58:59.925338   83900 crio.go:469] duration metric: took 2.104325305s to extract the tarball
	I1209 23:58:59.925348   83900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:58:59.962716   83900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:00.008762   83900 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:59:00.008793   83900 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:59:00.008803   83900 kubeadm.go:934] updating node { 192.168.50.19 8443 v1.31.2 crio true true} ...
	I1209 23:59:00.008929   83900 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-825613 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:00.009008   83900 ssh_runner.go:195] Run: crio config
	I1209 23:59:00.050777   83900 cni.go:84] Creating CNI manager for ""
	I1209 23:59:00.050801   83900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:00.050813   83900 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:00.050842   83900 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.19 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-825613 NodeName:embed-certs-825613 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:59:00.051001   83900 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-825613"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.19"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.19"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:00.051083   83900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:59:00.062278   83900 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:00.062354   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:00.073157   83900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1209 23:59:00.088933   83900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:00.104358   83900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1209 23:59:00.122425   83900 ssh_runner.go:195] Run: grep 192.168.50.19	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:00.126224   83900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:00.138974   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:00.252489   83900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:00.269127   83900 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613 for IP: 192.168.50.19
	I1209 23:59:00.269155   83900 certs.go:194] generating shared ca certs ...
	I1209 23:59:00.269177   83900 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:00.269359   83900 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:00.269406   83900 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:00.269421   83900 certs.go:256] generating profile certs ...
	I1209 23:59:00.269551   83900 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/client.key
	I1209 23:59:00.269651   83900 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.key.12f830e7
	I1209 23:59:00.269724   83900 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.key
	I1209 23:59:00.269901   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:00.269954   83900 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:00.269968   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:00.270007   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:00.270056   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:00.270096   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:00.270161   83900 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:00.271078   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:00.322392   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:00.351665   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:00.378615   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:00.401622   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 23:59:00.430280   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 23:59:00.453489   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:00.478368   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/embed-certs-825613/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:00.501404   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:00.524273   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:00.546132   83900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:00.568553   83900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:00.584196   83900 ssh_runner.go:195] Run: openssl version
	I1209 23:59:00.589905   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:00.600107   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.604436   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.604504   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:00.609960   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:00.620129   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:00.629895   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.633993   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.634053   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:00.639538   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:00.649781   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:00.659593   83900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.663789   83900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.663832   83900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:00.669033   83900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:00.678881   83900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:00.683006   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:00.688631   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:00.694212   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:00.699930   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:00.705400   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:00.710668   83900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
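The openssl x509 -noout -checkend 86400 runs above verify that each existing control-plane certificate stays valid for at least another 24 hours before the cluster is reused. An equivalent check in Go, offered only as an illustrative sketch (the certificate path is taken from the log; the helper itself is not minikube code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certValidFor reports whether the PEM certificate at path will still be
    // valid d from now, matching the semantics of `openssl x509 -checkend`.
    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }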
	I1209 23:59:00.715982   83900 kubeadm.go:392] StartCluster: {Name:embed-certs-825613 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-825613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:00.716059   83900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:00.716094   83900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:00.758093   83900 cri.go:89] found id: ""
	I1209 23:59:00.758164   83900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:00.767877   83900 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:00.767895   83900 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:00.767932   83900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:00.777172   83900 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:00.778578   83900 kubeconfig.go:125] found "embed-certs-825613" server: "https://192.168.50.19:8443"
	I1209 23:59:00.780691   83900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:00.790459   83900 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.19
	I1209 23:59:00.790492   83900 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:00.790507   83900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:00.790563   83900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:00.831048   83900 cri.go:89] found id: ""
	I1209 23:59:00.831129   83900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:00.847797   83900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:00.857819   83900 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:00.857841   83900 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:00.857877   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:59:00.867108   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:00.867159   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:00.876742   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:59:00.885861   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:00.885942   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:00.895651   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:59:00.904027   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:00.904092   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:00.912956   83900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:59:00.921647   83900 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:00.921704   83900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:00.930445   83900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:00.939496   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:01.049561   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.012953   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.234060   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.302650   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:02.378572   83900 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:02.378677   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:02.878755   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:03.379050   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:03.879215   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:02.880033   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:02.880532   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:02.880560   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:02.880488   85338 retry.go:31] will retry after 2.112793708s: waiting for machine to come up
	I1209 23:59:04.994713   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:04.995223   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:04.995254   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:04.995183   85338 retry.go:31] will retry after 2.420394988s: waiting for machine to come up
	I1209 23:59:07.416846   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:07.417309   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:07.417338   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:07.417281   85338 retry.go:31] will retry after 2.479420656s: waiting for machine to come up
	I1209 23:59:04.379735   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:04.406446   83900 api_server.go:72] duration metric: took 2.027872494s to wait for apiserver process to appear ...
	I1209 23:59:04.406480   83900 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:59:04.406506   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:04.407056   83900 api_server.go:269] stopped: https://192.168.50.19:8443/healthz: Get "https://192.168.50.19:8443/healthz": dial tcp 192.168.50.19:8443: connect: connection refused
	I1209 23:59:04.907381   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:06.967755   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:06.967791   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:06.967808   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.003261   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:07.003295   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:07.406692   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.411139   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:07.411168   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:07.906689   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:07.911136   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:07.911176   83900 api_server.go:103] status: https://192.168.50.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:08.406697   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1209 23:59:08.411051   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 200:
	ok
	I1209 23:59:08.422472   83900 api_server.go:141] control plane version: v1.31.2
	I1209 23:59:08.422506   83900 api_server.go:131] duration metric: took 4.01601823s to wait for apiserver health ...
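The wait above polls the apiserver's /healthz endpoint roughly every 500ms until it returns 200 "ok", tolerating the interim 403 (anonymous access before RBAC bootstrap completes) and 500 (post-start hooks still running) responses seen in the log. A self-contained Go sketch of such a poll loop, with the URL and the unauthenticated, verification-skipping client as stated assumptions rather than minikube's actual client setup:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns 200 "ok" or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The apiserver serves a cluster-CA-signed certificate, so this
    		// sketch skips verification instead of loading the CA bundle.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil // apiserver reports healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.19:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }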
	I1209 23:59:08.422532   83900 cni.go:84] Creating CNI manager for ""
	I1209 23:59:08.422541   83900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:08.424238   83900 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:59:08.425539   83900 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:59:08.435424   83900 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 23:59:08.451320   83900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:59:08.461244   83900 system_pods.go:59] 8 kube-system pods found
	I1209 23:59:08.461285   83900 system_pods.go:61] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:59:08.461296   83900 system_pods.go:61] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:59:08.461305   83900 system_pods.go:61] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:59:08.461313   83900 system_pods.go:61] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:59:08.461322   83900 system_pods.go:61] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 23:59:08.461329   83900 system_pods.go:61] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:59:08.461346   83900 system_pods.go:61] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:59:08.461358   83900 system_pods.go:61] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 23:59:08.461368   83900 system_pods.go:74] duration metric: took 10.019045ms to wait for pod list to return data ...
	I1209 23:59:08.461381   83900 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:59:08.464945   83900 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:59:08.464973   83900 node_conditions.go:123] node cpu capacity is 2
	I1209 23:59:08.464986   83900 node_conditions.go:105] duration metric: took 3.600013ms to run NodePressure ...
	I1209 23:59:08.465011   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:08.762283   83900 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 23:59:08.766618   83900 kubeadm.go:739] kubelet initialised
	I1209 23:59:08.766640   83900 kubeadm.go:740] duration metric: took 4.332483ms waiting for restarted kubelet to initialise ...
	I1209 23:59:08.766648   83900 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:08.771039   83900 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.775795   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.775820   83900 pod_ready.go:82] duration metric: took 4.756744ms for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.775829   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.775836   83900 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.780473   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "etcd-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.780498   83900 pod_ready.go:82] duration metric: took 4.651756ms for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.780507   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "etcd-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.780517   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.785226   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.785252   83900 pod_ready.go:82] duration metric: took 4.725948ms for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.785261   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.785268   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:08.855086   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.855117   83900 pod_ready.go:82] duration metric: took 69.839948ms for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:08.855129   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:08.855141   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:09.255415   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-proxy-rn6fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.255451   83900 pod_ready.go:82] duration metric: took 400.29383ms for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:09.255461   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-proxy-rn6fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.255467   83900 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:09.654750   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.654775   83900 pod_ready.go:82] duration metric: took 399.301549ms for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:09.654785   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:09.654792   83900 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:10.054952   83900 pod_ready.go:98] node "embed-certs-825613" hosting pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:10.054979   83900 pod_ready.go:82] duration metric: took 400.177675ms for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	E1209 23:59:10.054995   83900 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-825613" hosting pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:10.055002   83900 pod_ready.go:39] duration metric: took 1.288346997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
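The wait loop above polls each system-critical pod for the Ready condition and bails out early when the hosting node itself is NotReady. A minimal client-go sketch of that polling pattern (the kubeconfig path and the 2s poll interval are assumptions for illustration; this is not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports the Ready condition or the timeout expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-embed-certs-825613", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}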
	I1209 23:59:10.055019   83900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:59:10.066533   83900 ops.go:34] apiserver oom_adj: -16
	I1209 23:59:10.066559   83900 kubeadm.go:597] duration metric: took 9.298658158s to restartPrimaryControlPlane
	I1209 23:59:10.066570   83900 kubeadm.go:394] duration metric: took 9.350595042s to StartCluster
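After restarting the primary control plane, the run above reads the apiserver's oom_adj through /proc to confirm the process came back with the expected OOM priority (-16). A small sketch of the same shell one-liner driven from Go, purely illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same one-liner as in the log: resolve the apiserver PID and read its oom_adj.
	out, err := exec.Command("/bin/bash", "-c", "cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
	if err != nil {
		fmt.Println("could not read oom_adj:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
}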
	I1209 23:59:10.066590   83900 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:10.066674   83900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:59:10.068469   83900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:10.068732   83900 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:59:10.068795   83900 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 23:59:10.068901   83900 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-825613"
	I1209 23:59:10.068925   83900 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-825613"
	W1209 23:59:10.068932   83900 addons.go:243] addon storage-provisioner should already be in state true
	I1209 23:59:10.068966   83900 config.go:182] Loaded profile config "embed-certs-825613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:10.068960   83900 addons.go:69] Setting default-storageclass=true in profile "embed-certs-825613"
	I1209 23:59:10.068969   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.069011   83900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-825613"
	I1209 23:59:10.068972   83900 addons.go:69] Setting metrics-server=true in profile "embed-certs-825613"
	I1209 23:59:10.069584   83900 addons.go:234] Setting addon metrics-server=true in "embed-certs-825613"
	W1209 23:59:10.069616   83900 addons.go:243] addon metrics-server should already be in state true
	I1209 23:59:10.069644   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.070176   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.070220   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.070219   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.070255   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.070947   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.071024   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.071039   83900 out.go:177] * Verifying Kubernetes components...
	I1209 23:59:10.073122   83900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:10.085793   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I1209 23:59:10.086012   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I1209 23:59:10.086282   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.086412   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.086794   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.086823   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.086907   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.086930   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.087156   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38521
	I1209 23:59:10.087160   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.087251   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.087469   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.087617   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.087792   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.087828   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.088155   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.088177   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.088692   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.089323   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.089358   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.090339   83900 addons.go:234] Setting addon default-storageclass=true in "embed-certs-825613"
	W1209 23:59:10.090357   83900 addons.go:243] addon default-storageclass should already be in state true
	I1209 23:59:10.090379   83900 host.go:66] Checking if "embed-certs-825613" exists ...
	I1209 23:59:10.090609   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.090639   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.103251   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I1209 23:59:10.103791   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104305   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.104325   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.104376   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I1209 23:59:10.104560   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I1209 23:59:10.104713   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.104736   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104848   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.104854   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.105269   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.105285   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.105393   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.105414   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.105595   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.105751   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.105784   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.106203   83900 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:10.106235   83900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:10.107268   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.107710   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.109781   83900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:10.109863   83900 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 23:59:10.111198   83900 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:59:10.111210   83900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:59:10.111224   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.111294   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:59:10.111300   83900 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:59:10.111309   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.114610   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.114929   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.114962   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.114980   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.115159   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.115318   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.115438   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.115474   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.115516   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.115599   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.115704   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.115844   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.115944   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.116149   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.123450   83900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1209 23:59:10.123926   83900 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:10.124406   83900 main.go:141] libmachine: Using API Version  1
	I1209 23:59:10.124445   83900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:10.124744   83900 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:10.124936   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetState
	I1209 23:59:10.126437   83900 main.go:141] libmachine: (embed-certs-825613) Calling .DriverName
	I1209 23:59:10.126619   83900 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:59:10.126637   83900 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:59:10.126656   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHHostname
	I1209 23:59:10.129218   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.129663   83900 main.go:141] libmachine: (embed-certs-825613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2e:da", ip: ""} in network mk-embed-certs-825613: {Iface:virbr2 ExpiryTime:2024-12-10 00:58:45 +0000 UTC Type:0 Mac:52:54:00:0f:2e:da Iaid: IPaddr:192.168.50.19 Prefix:24 Hostname:embed-certs-825613 Clientid:01:52:54:00:0f:2e:da}
	I1209 23:59:10.129695   83900 main.go:141] libmachine: (embed-certs-825613) DBG | domain embed-certs-825613 has defined IP address 192.168.50.19 and MAC address 52:54:00:0f:2e:da in network mk-embed-certs-825613
	I1209 23:59:10.129874   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHPort
	I1209 23:59:10.130055   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHKeyPath
	I1209 23:59:10.130164   83900 main.go:141] libmachine: (embed-certs-825613) Calling .GetSSHUsername
	I1209 23:59:10.130296   83900 sshutil.go:53] new ssh client: &{IP:192.168.50.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/embed-certs-825613/id_rsa Username:docker}
	I1209 23:59:10.272711   83900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:10.290121   83900 node_ready.go:35] waiting up to 6m0s for node "embed-certs-825613" to be "Ready" ...
	I1209 23:59:10.379383   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:59:10.397672   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:59:10.403543   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:59:10.403591   83900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 23:59:10.439890   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:59:10.439915   83900 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:59:10.482300   83900 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:59:10.482331   83900 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:59:10.550923   83900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:59:11.613572   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.234149831s)
	I1209 23:59:11.613622   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.613634   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.613655   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.21594498s)
	I1209 23:59:11.613701   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.613713   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.613929   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.613976   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.613992   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.613979   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614006   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.614018   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614032   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.614052   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.614000   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.614085   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.614332   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614343   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.614343   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.614361   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614362   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.614372   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.621731   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.621749   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.622001   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.622017   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664217   83900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113253318s)
	I1209 23:59:11.664273   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.664288   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.664633   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.664637   83900 main.go:141] libmachine: (embed-certs-825613) DBG | Closing plugin on server side
	I1209 23:59:11.664649   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664658   83900 main.go:141] libmachine: Making call to close driver server
	I1209 23:59:11.664676   83900 main.go:141] libmachine: (embed-certs-825613) Calling .Close
	I1209 23:59:11.664875   83900 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:59:11.664888   83900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:59:11.664903   83900 addons.go:475] Verifying addon metrics-server=true in "embed-certs-825613"
	I1209 23:59:11.666814   83900 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1209 23:59:09.899886   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:09.900284   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | unable to find current IP address of domain default-k8s-diff-port-871210 in network mk-default-k8s-diff-port-871210
	I1209 23:59:09.900314   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | I1209 23:59:09.900262   85338 retry.go:31] will retry after 3.641244983s: waiting for machine to come up
	I1209 23:59:11.668002   83900 addons.go:510] duration metric: took 1.599215886s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
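Addon enablement above stages each manifest under /etc/kubernetes/addons on the guest and applies it with the cluster's own kubectl binary and kubeconfig. A sketch of that apply pattern (the binary and kubeconfig paths are the ones shown in the log; the loop and error handling are illustrative assumptions, not minikube's addons code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon applies one staged addon manifest with the cluster-local kubeconfig.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.2/kubectl", "apply", "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println("apply failed:", m, err)
		}
	}
}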
	I1209 23:59:12.293475   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:14.960350   84547 start.go:364] duration metric: took 3m49.343321897s to acquireMachinesLock for "old-k8s-version-720064"
	I1209 23:59:14.960428   84547 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:59:14.960440   84547 fix.go:54] fixHost starting: 
	I1209 23:59:14.960886   84547 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:14.960950   84547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:14.981976   84547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I1209 23:59:14.982425   84547 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:14.982933   84547 main.go:141] libmachine: Using API Version  1
	I1209 23:59:14.982966   84547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:14.983341   84547 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:14.983587   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:14.983772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetState
	I1209 23:59:14.985748   84547 fix.go:112] recreateIfNeeded on old-k8s-version-720064: state=Stopped err=<nil>
	I1209 23:59:14.985774   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	W1209 23:59:14.985968   84547 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:59:14.987652   84547 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-720064" ...
	I1209 23:59:14.988869   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .Start
	I1209 23:59:14.989086   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring networks are active...
	I1209 23:59:14.989817   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network default is active
	I1209 23:59:14.990188   84547 main.go:141] libmachine: (old-k8s-version-720064) Ensuring network mk-old-k8s-version-720064 is active
	I1209 23:59:14.990640   84547 main.go:141] libmachine: (old-k8s-version-720064) Getting domain xml...
	I1209 23:59:14.991392   84547 main.go:141] libmachine: (old-k8s-version-720064) Creating domain...
	I1209 23:59:13.544971   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.545381   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Found IP for machine: 192.168.72.54
	I1209 23:59:13.545408   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has current primary IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.545418   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Reserving static IP address...
	I1209 23:59:13.545913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-871210", mac: "52:54:00:5e:5b:a3", ip: "192.168.72.54"} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.545942   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | skip adding static IP to network mk-default-k8s-diff-port-871210 - found existing host DHCP lease matching {name: "default-k8s-diff-port-871210", mac: "52:54:00:5e:5b:a3", ip: "192.168.72.54"}
	I1209 23:59:13.545957   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Reserved static IP address: 192.168.72.54
	I1209 23:59:13.545973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Waiting for SSH to be available...
	I1209 23:59:13.545988   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Getting to WaitForSSH function...
	I1209 23:59:13.548279   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.548641   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.548700   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.548788   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Using SSH client type: external
	I1209 23:59:13.548812   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa (-rw-------)
	I1209 23:59:13.548882   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:13.548908   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | About to run SSH command:
	I1209 23:59:13.548923   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | exit 0
	I1209 23:59:13.687704   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | SSH cmd err, output: <nil>: 
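The WaitForSSH step above retries an external "ssh ... exit 0" probe against the machine until a shell can be reached. A hedged Go sketch of that retry loop (address and key path are taken from the log; the 30x5s retry policy and option set are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps probing with an external ssh client until "exit 0" succeeds.
func waitForSSH(addr, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@" + addr, "exit", "0",
	}
	for i := 0; i < 30; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became available", addr)
}

func main() {
	err := waitForSSH("192.168.72.54",
		"/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa")
	fmt.Println(err)
}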
	I1209 23:59:13.688021   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetConfigRaw
	I1209 23:59:13.688673   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:13.690853   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.691187   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.691224   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.691421   84259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/config.json ...
	I1209 23:59:13.691646   84259 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:13.691665   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:13.691901   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.693958   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.694230   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.694257   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.694392   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.694602   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.694771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.694913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.695093   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.695278   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.695288   84259 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:13.803602   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:13.803629   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:13.803904   84259 buildroot.go:166] provisioning hostname "default-k8s-diff-port-871210"
	I1209 23:59:13.803928   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:13.804106   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.806369   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.806769   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.806800   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.806906   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.807053   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.807222   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.807329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.807551   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.807751   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.807765   84259 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-871210 && echo "default-k8s-diff-port-871210" | sudo tee /etc/hostname
	I1209 23:59:13.932849   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-871210
	
	I1209 23:59:13.932874   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:13.935573   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.935943   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:13.935972   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:13.936143   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:13.936388   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.936546   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:13.936682   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:13.936840   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:13.937016   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:13.937046   84259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-871210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-871210/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-871210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:14.057493   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
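The provisioning step above sets the guest hostname and patches /etc/hosts over the native SSH client. A sketch of the same idea with golang.org/x/crypto/ssh (host, user, and key path are from the log; the combined command is a simplified stand-in for the provisioner script, not minikube's implementation):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.72.54:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	name := "default-k8s-diff-port-871210"
	// Simplified: set the hostname, persist it, and make sure /etc/hosts resolves it.
	cmd := fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname && `+
		`grep -q %[1]s /etc/hosts || echo "127.0.1.1 %[1]s" | sudo tee -a /etc/hosts`, name)

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	fmt.Println(string(out), err)
}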
	I1209 23:59:14.057526   84259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:14.057565   84259 buildroot.go:174] setting up certificates
	I1209 23:59:14.057575   84259 provision.go:84] configureAuth start
	I1209 23:59:14.057586   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetMachineName
	I1209 23:59:14.057860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:14.060619   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.060947   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.060973   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.061170   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.063591   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.063919   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.063946   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.064133   84259 provision.go:143] copyHostCerts
	I1209 23:59:14.064180   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:14.064189   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:14.064242   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:14.064333   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:14.064341   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:14.064364   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:14.064416   84259 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:14.064424   84259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:14.064442   84259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:14.064488   84259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-871210 san=[127.0.0.1 192.168.72.54 default-k8s-diff-port-871210 localhost minikube]
	I1209 23:59:14.332090   84259 provision.go:177] copyRemoteCerts
	I1209 23:59:14.332148   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:14.332174   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.334856   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.335275   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.335309   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.335428   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.335647   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.335860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.336002   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.422641   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:14.445943   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1209 23:59:14.468188   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:59:14.490678   84259 provision.go:87] duration metric: took 433.06125ms to configureAuth
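configureAuth above regenerates the server certificate with the machine's SANs and pushes ca.pem, server.pem, and server-key.pem into /etc/docker on the guest. A simplified sketch of that copy step using external scp/ssh with a /tmp staging directory (both are simplifications; minikube streams the files over its existing SSH session):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	base := "/home/jenkins/minikube-integration/19888-18950/.minikube"
	key := base + "/machines/default-k8s-diff-port-871210/id_rsa"
	host := "docker@192.168.72.54"

	copies := [][2]string{
		{base + "/certs/ca.pem", "/etc/docker/ca.pem"},
		{base + "/machines/server.pem", "/etc/docker/server.pem"},
		{base + "/machines/server-key.pem", "/etc/docker/server-key.pem"},
	}
	for _, c := range copies {
		tmp := "/tmp/" + c[1][len("/etc/docker/"):] // stage under /tmp, then move into place with sudo
		if err := exec.Command("scp", "-i", key, "-o", "StrictHostKeyChecking=no", c[0], host+":"+tmp).Run(); err != nil {
			fmt.Println("scp failed:", c[0], err)
			return
		}
		if err := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no", host,
			"sudo", "mv", tmp, c[1]).Run(); err != nil {
			fmt.Println("remote move failed:", c[1], err)
			return
		}
	}
	fmt.Println("certificates copied to /etc/docker")
}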
	I1209 23:59:14.490710   84259 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:14.490883   84259 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:14.490953   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.493529   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.493858   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.493879   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.494073   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.494276   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.494435   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.494555   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.494716   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:14.494880   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:14.494899   84259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:14.720424   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:14.720452   84259 machine.go:96] duration metric: took 1.028790944s to provisionDockerMachine
	I1209 23:59:14.720468   84259 start.go:293] postStartSetup for "default-k8s-diff-port-871210" (driver="kvm2")
	I1209 23:59:14.720480   84259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:14.720496   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.720814   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:14.720844   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.723417   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.723817   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.723851   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.723992   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.724171   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.724328   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.724453   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.810295   84259 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:14.814400   84259 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:14.814424   84259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:14.814501   84259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:14.814595   84259 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:14.814706   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:14.823765   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:14.847185   84259 start.go:296] duration metric: took 126.701807ms for postStartSetup
	I1209 23:59:14.847226   84259 fix.go:56] duration metric: took 19.803160459s for fixHost
	I1209 23:59:14.847245   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.850067   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.850488   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.850516   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.850697   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.850915   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.851082   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.851218   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.851369   84259 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:14.851558   84259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.54 22 <nil> <nil>}
	I1209 23:59:14.851588   84259 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:14.960167   84259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788754.933814431
	
	I1209 23:59:14.960190   84259 fix.go:216] guest clock: 1733788754.933814431
	I1209 23:59:14.960198   84259 fix.go:229] Guest: 2024-12-09 23:59:14.933814431 +0000 UTC Remote: 2024-12-09 23:59:14.847230309 +0000 UTC m=+262.142902203 (delta=86.584122ms)
	I1209 23:59:14.960217   84259 fix.go:200] guest clock delta is within tolerance: 86.584122ms
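The clock check above parses the guest's `date +%s.%N` output and compares it against the host clock, accepting a small drift. A sketch of that comparison (the one-second tolerance is an assumption; the sample values are the ones logged above):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns guest-minus-host drift.
func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(int64(secs), int64((secs-math.Floor(secs))*1e9))
	return guest.Sub(hostNow), nil
}

func main() {
	host := time.Unix(1733788754, 847230309)
	delta, err := clockDelta("1733788754.933814431", host)
	if err != nil {
		panic(err)
	}
	if delta < -time.Second || delta > time.Second {
		fmt.Println("guest clock delta outside tolerance:", delta)
	} else {
		fmt.Println("guest clock delta is within tolerance:", delta)
	}
}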
	I1209 23:59:14.960222   84259 start.go:83] releasing machines lock for "default-k8s-diff-port-871210", held for 19.916185392s
	I1209 23:59:14.960244   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.960544   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:14.963243   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.963653   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.963693   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.963820   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964285   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964505   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1209 23:59:14.964612   84259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:14.964689   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.964749   84259 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:14.964772   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1209 23:59:14.967430   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.967811   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.967844   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.967871   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.968018   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.968194   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.968388   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.968405   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:14.968427   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:14.968563   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1209 23:59:14.968562   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:14.968706   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1209 23:59:14.968878   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1209 23:59:14.969038   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1209 23:59:15.072098   84259 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:15.077846   84259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:15.222108   84259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:15.230013   84259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:15.230097   84259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:15.247065   84259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:15.247091   84259 start.go:495] detecting cgroup driver to use...
	I1209 23:59:15.247168   84259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:15.268262   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:15.283824   84259 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:15.283893   84259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:15.299384   84259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:15.318727   84259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:15.457157   84259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:15.642629   84259 docker.go:233] disabling docker service ...
	I1209 23:59:15.642718   84259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:15.661646   84259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:15.681887   84259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:15.838344   84259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:15.971028   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:15.984896   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:16.006631   84259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:59:16.006691   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.019058   84259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:16.019143   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.031838   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.043176   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.057386   84259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:16.068694   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.083326   84259 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:16.102288   84259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
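The sed edits above point CRI-O at the registry.k8s.io/pause:3.10 pause image, switch it to the cgroupfs cgroup manager, and open unprivileged ports via default_sysctls. Below is a minimal Go sketch of that kind of in-place key rewrite; the file path and helper are hypothetical and only illustrate the pattern, they are not minikube's crio.go.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setKey rewrites any existing `key = ...` line to key = "value",
	// mirroring the sed edits shown in the log above.
	func setKey(conf []byte, key, value string) []byte {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
	}

	func main() {
		path := "02-crio.conf" // hypothetical local copy of /etc/crio/crio.conf.d/02-crio.conf
		conf, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
		conf = setKey(conf, "cgroup_manager", "cgroupfs")
		if err := os.WriteFile(path, conf, 0o644); err != nil {
			panic(err)
		}
	}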
	I1209 23:59:16.113538   84259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:16.126450   84259 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:16.126516   84259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:16.147057   84259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:16.157329   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:16.285726   84259 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:16.385931   84259 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:16.386014   84259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:16.390840   84259 start.go:563] Will wait 60s for crictl version
	I1209 23:59:16.390912   84259 ssh_runner.go:195] Run: which crictl
	I1209 23:59:16.394870   84259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:16.438893   84259 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
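Before crictl can report the runtime version shown above, the CRI-O socket has to exist; the log notes that minikube allows up to 60s for it. A small stand-alone Go sketch of such a wait loop (assumed socket path, illustrative only):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for a socket path until it exists or the timeout expires.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("socket is ready")
	}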
	I1209 23:59:16.438986   84259 ssh_runner.go:195] Run: crio --version
	I1209 23:59:16.470118   84259 ssh_runner.go:195] Run: crio --version
	I1209 23:59:16.499603   84259 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:59:16.500665   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetIP
	I1209 23:59:16.503766   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:16.504242   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1209 23:59:16.504329   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1209 23:59:16.504609   84259 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:16.508742   84259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
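The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends the gateway IP. The same transformation, sketched in Go against an in-memory copy of the file (hypothetical helper, not minikube's code):

	package main

	import (
		"fmt"
		"strings"
	)

	// addHostEntry drops any existing line ending in name and appends "ip<TAB>name",
	// like the grep -v / echo pipeline in the log above.
	func addHostEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			fields := strings.Fields(line)
			if len(fields) > 0 && fields[len(fields)-1] == name {
				continue // stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.72.3\thost.minikube.internal\n"
		fmt.Print(addHostEntry(hosts, "192.168.72.1", "host.minikube.internal"))
	}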
	I1209 23:59:16.520816   84259 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-871210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:16.520944   84259 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:59:16.520988   84259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:16.563220   84259 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:59:16.563315   84259 ssh_runner.go:195] Run: which lz4
	I1209 23:59:16.568962   84259 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:59:16.573715   84259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:59:16.573756   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
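Whether the preload tarball has to be copied over is decided by listing images through crictl and looking for the expected kube-apiserver tag. A hedged Go sketch of that check follows; the JSON field names are assumed from crictl's output, and sudo/crictl must be available on the host for it to run.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the shape of `crictl images --output json` (field names assumed).
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("unexpected output:", err)
			return
		}
		want := "registry.k8s.io/kube-apiserver:v1.31.2"
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Println("preloaded images present")
					return
				}
			}
		}
		fmt.Println("assuming images are not preloaded")
	}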
	I1209 23:59:14.294886   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:16.796597   83900 node_ready.go:53] node "embed-certs-825613" has status "Ready":"False"
	I1209 23:59:17.795347   83900 node_ready.go:49] node "embed-certs-825613" has status "Ready":"True"
	I1209 23:59:17.795381   83900 node_ready.go:38] duration metric: took 7.505214814s for node "embed-certs-825613" to be "Ready" ...
	I1209 23:59:17.795394   83900 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:17.801650   83900 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:17.808087   83900 pod_ready.go:93] pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:17.808113   83900 pod_ready.go:82] duration metric: took 6.437717ms for pod "coredns-7c65d6cfc9-qvtlr" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:17.808127   83900 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:16.350842   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting to get IP...
	I1209 23:59:16.351883   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.352326   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.352393   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.352304   85519 retry.go:31] will retry after 268.149849ms: waiting for machine to come up
	I1209 23:59:16.621742   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.622116   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.622145   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.622066   85519 retry.go:31] will retry after 365.051996ms: waiting for machine to come up
	I1209 23:59:16.988590   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:16.989124   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:16.989154   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:16.989089   85519 retry.go:31] will retry after 441.697933ms: waiting for machine to come up
	I1209 23:59:17.432962   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.433453   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.433482   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.433356   85519 retry.go:31] will retry after 503.173846ms: waiting for machine to come up
	I1209 23:59:17.938107   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:17.938576   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:17.938610   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:17.938533   85519 retry.go:31] will retry after 476.993358ms: waiting for machine to come up
	I1209 23:59:18.417462   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:18.418037   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:18.418064   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:18.417989   85519 retry.go:31] will retry after 732.449849ms: waiting for machine to come up
	I1209 23:59:19.152120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.152680   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.152708   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.152624   85519 retry.go:31] will retry after 764.17794ms: waiting for machine to come up
	I1209 23:59:19.918630   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:19.919113   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:19.919141   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:19.919066   85519 retry.go:31] will retry after 1.072352346s: waiting for machine to come up
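Meanwhile the old-k8s-version-720064 VM is still waiting for a DHCP lease, and libmachine keeps retrying the lookup with growing, jittered delays. A generic sketch of that retry pattern (the lookupIP stub is a placeholder, not libmachine's API):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for asking libvirt for the domain's current DHCP lease.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			if ip, err := lookupIP(); err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay))) // jittered backoff, as in the retry.go lines above
			fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay *= 2
		}
		fmt.Println("gave up waiting for an IP address")
	}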
	I1209 23:59:17.907764   84259 crio.go:462] duration metric: took 1.338836821s to copy over tarball
	I1209 23:59:17.907846   84259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:59:20.133171   84259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.22529043s)
	I1209 23:59:20.133214   84259 crio.go:469] duration metric: took 2.225420822s to extract the tarball
	I1209 23:59:20.133224   84259 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:59:20.169786   84259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:20.211990   84259 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:59:20.212020   84259 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:59:20.212030   84259 kubeadm.go:934] updating node { 192.168.72.54 8444 v1.31.2 crio true true} ...
	I1209 23:59:20.212166   84259 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-871210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:20.212247   84259 ssh_runner.go:195] Run: crio config
	I1209 23:59:20.255141   84259 cni.go:84] Creating CNI manager for ""
	I1209 23:59:20.255169   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:20.255182   84259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:20.255215   84259 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.54 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-871210 NodeName:default-k8s-diff-port-871210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:59:20.255338   84259 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.54
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-871210"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.54"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.54"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:20.255407   84259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:59:20.265160   84259 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:20.265244   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:20.275799   84259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1209 23:59:20.292089   84259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:20.308054   84259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
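The kubeadm, kubelet and kube-proxy configuration dumped above is rendered from the options logged at kubeadm.go:189 and then copied to /var/tmp/minikube/kubeadm.yaml.new. A toy text/template sketch of how such a fragment could be generated; the template text and struct fields are illustrative, not minikube's actual template.

	package main

	import (
		"os"
		"text/template"
	)

	// opts carries just the fields needed for this fragment.
	type opts struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
	}

	const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(fragment))
		_ = t.Execute(os.Stdout, opts{
			AdvertiseAddress: "192.168.72.54",
			APIServerPort:    8444,
			NodeName:         "default-k8s-diff-port-871210",
		})
	}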
	I1209 23:59:20.327464   84259 ssh_runner.go:195] Run: grep 192.168.72.54	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:20.331101   84259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:20.343232   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:20.473225   84259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:20.492069   84259 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210 for IP: 192.168.72.54
	I1209 23:59:20.492098   84259 certs.go:194] generating shared ca certs ...
	I1209 23:59:20.492119   84259 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:20.492311   84259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:20.492363   84259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:20.492378   84259 certs.go:256] generating profile certs ...
	I1209 23:59:20.492499   84259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/client.key
	I1209 23:59:20.492605   84259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.key.9cc284f2
	I1209 23:59:20.492663   84259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.key
	I1209 23:59:20.492831   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:20.492872   84259 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:20.492886   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:20.492918   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:20.492951   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:20.492997   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:20.493071   84259 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:20.494023   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:20.543769   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:20.583697   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:20.615170   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:20.638784   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 23:59:20.673708   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:59:20.696690   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:20.719682   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/default-k8s-diff-port-871210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:20.744292   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:20.767643   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:20.790927   84259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:20.816746   84259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:20.834150   84259 ssh_runner.go:195] Run: openssl version
	I1209 23:59:20.840153   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:20.851006   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.855430   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.855492   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:20.860866   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:20.871983   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:20.882642   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.886978   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.887050   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:20.892472   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:20.902959   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:20.913774   84259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.918344   84259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.918394   84259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:20.923981   84259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:20.934931   84259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:20.939662   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:20.945633   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:20.951650   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:20.957628   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:20.963600   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:20.969399   84259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
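Each "openssl x509 -checkend 86400" run above asks whether a certificate expires within the next 24 hours. An equivalent check written with Go's crypto/x509 (the file path here is a hypothetical local copy of one of the certs listed above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires before now+d,
	// mirroring the openssl -checkend calls in the log.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(d)), nil
	}

	func main() {
		soon, err := expiresWithin("apiserver.crt", 24*time.Hour) // hypothetical local path
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}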
	I1209 23:59:20.974992   84259 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-871210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-871210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:20.975088   84259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:20.975143   84259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:21.012553   84259 cri.go:89] found id: ""
	I1209 23:59:21.012647   84259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:21.023728   84259 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:21.023758   84259 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:21.023808   84259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:21.033788   84259 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:21.034806   84259 kubeconfig.go:125] found "default-k8s-diff-port-871210" server: "https://192.168.72.54:8444"
	I1209 23:59:21.036852   84259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:21.046251   84259 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.54
	I1209 23:59:21.046280   84259 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:21.046292   84259 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:21.046354   84259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:21.084915   84259 cri.go:89] found id: ""
	I1209 23:59:21.085000   84259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:21.100592   84259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:21.111180   84259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:21.111204   84259 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:21.111254   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 23:59:21.120027   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:21.120087   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:21.129165   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 23:59:21.138254   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:21.138315   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:21.147276   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 23:59:21.155635   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:21.155711   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:21.164794   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 23:59:21.173421   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:21.173477   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:21.182615   84259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:21.191795   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:21.289295   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.415644   84259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.126305379s)
	I1209 23:59:22.415695   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.616211   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.672571   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:22.731282   84259 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:22.731363   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:19.816966   83900 pod_ready.go:103] pod "etcd-embed-certs-825613" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:21.814625   83900 pod_ready.go:93] pod "etcd-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.814650   83900 pod_ready.go:82] duration metric: took 4.006513871s for pod "etcd-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.814662   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.819611   83900 pod_ready.go:93] pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.819634   83900 pod_ready.go:82] duration metric: took 4.964081ms for pod "kube-apiserver-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.819647   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.823994   83900 pod_ready.go:93] pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.824016   83900 pod_ready.go:82] duration metric: took 4.360902ms for pod "kube-controller-manager-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.824028   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.829102   83900 pod_ready.go:93] pod "kube-proxy-rn6fg" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:21.829128   83900 pod_ready.go:82] duration metric: took 5.090595ms for pod "kube-proxy-rn6fg" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:21.829161   83900 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:23.983548   83900 pod_ready.go:93] pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:23.983603   83900 pod_ready.go:82] duration metric: took 2.154427639s for pod "kube-scheduler-embed-certs-825613" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:23.983620   83900 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:20.992747   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:20.993138   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:20.993196   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:20.993111   85519 retry.go:31] will retry after 1.479639772s: waiting for machine to come up
	I1209 23:59:22.474889   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:22.475414   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:22.475445   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:22.475364   85519 retry.go:31] will retry after 2.248463233s: waiting for machine to come up
	I1209 23:59:24.725307   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:24.725765   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:24.725796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:24.725706   85519 retry.go:31] will retry after 2.803512929s: waiting for machine to come up
	I1209 23:59:23.231575   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:23.731704   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:24.231740   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:24.265107   84259 api_server.go:72] duration metric: took 1.533824471s to wait for apiserver process to appear ...
	I1209 23:59:24.265144   84259 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:59:24.265173   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:24.265664   84259 api_server.go:269] stopped: https://192.168.72.54:8444/healthz: Get "https://192.168.72.54:8444/healthz": dial tcp 192.168.72.54:8444: connect: connection refused
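The healthz probes that follow show the apiserver coming up in stages: first a refused connection, then 403s while RBAC bootstraps, then 500s from /healthz until every poststart hook reports ok. A minimal Go sketch of such a poll loop (certificate verification is skipped here only for brevity; this is not minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a self-signed cert; a real client should trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.72.54:8444/healthz"
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
			} else {
				resp.Body.Close()
				fmt.Println("healthz returned", resp.StatusCode)
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for a healthy apiserver")
	}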
	I1209 23:59:24.765571   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.251990   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:27.252018   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:27.252033   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.284733   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 23:59:27.284769   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 23:59:27.284781   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.313967   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:27.313998   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:25.990720   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:28.490332   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:27.530567   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:27.531086   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:27.531120   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:27.531022   85519 retry.go:31] will retry after 2.800227475s: waiting for machine to come up
	I1209 23:59:30.333201   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:30.333660   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | unable to find current IP address of domain old-k8s-version-720064 in network mk-old-k8s-version-720064
	I1209 23:59:30.333684   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | I1209 23:59:30.333621   85519 retry.go:31] will retry after 3.041733113s: waiting for machine to come up
	I1209 23:59:27.765806   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:27.773528   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:27.773564   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:28.266012   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:28.275940   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:28.275965   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:28.765502   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:28.770695   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:28.770725   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:29.265309   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:29.270844   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:29.270883   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:29.765323   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:29.769949   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:29.769974   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:30.265981   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:30.270284   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 23:59:30.270313   84259 api_server.go:103] status: https://192.168.72.54:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 23:59:30.765915   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1209 23:59:30.771542   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I1209 23:59:30.781357   84259 api_server.go:141] control plane version: v1.31.2
	I1209 23:59:30.781390   84259 api_server.go:131] duration metric: took 6.516238077s to wait for apiserver health ...
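	(The wait logged above is a simple poll of the apiserver's /healthz endpoint until it stops returning 500. A minimal Go sketch of that pattern follows; it is not minikube's api_server.go, and the URL and ~500ms interval are taken from the log, the timeout is an assumption.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// The apiserver serves /healthz over HTTPS with a cluster CA, so this quick
// probe skips certificate verification, as a kubeconfig-less check would.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between checks
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.54:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}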
	I1209 23:59:30.781400   84259 cni.go:84] Creating CNI manager for ""
	I1209 23:59:30.781409   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:30.783438   84259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:59:30.784794   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:59:30.794916   84259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
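	(The 496-byte file scp'd above is the bridge CNI config minikube generates. The log does not show its contents; the sketch below writes a typical bridge+portmap conflist of that shape, so the subnet and plugin options are assumptions, not the verified bytes of 1-k8s.conflist.)

package main

import "os"

// A typical bridge+portmap CNI conflist of the kind written to
// /etc/cni/net.d/1-k8s.conflist. Exact contents are assumed; only the
// destination path and approximate size appear in the log above.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Rough equivalent of the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
		panic(err)
	}
}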
	I1209 23:59:30.812099   84259 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:59:30.822915   84259 system_pods.go:59] 8 kube-system pods found
	I1209 23:59:30.822967   84259 system_pods.go:61] "coredns-7c65d6cfc9-wclgl" [22b2e24e-5a03-4d4f-a071-68e414aaf6cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 23:59:30.822980   84259 system_pods.go:61] "etcd-default-k8s-diff-port-871210" [2058a33f-dd4b-42bc-94d9-4cd130b9389c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 23:59:30.822990   84259 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-871210" [26ffd01d-48b5-4e38-a91b-bf75dd75e3c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 23:59:30.823000   84259 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-871210" [cea3a810-c7e9-48d6-9fd7-1f6258c45387] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 23:59:30.823006   84259 system_pods.go:61] "kube-proxy-d7sxm" [e28ccb5f-a282-4371-ae1a-fca52dd58616] Running
	I1209 23:59:30.823021   84259 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-871210" [790619f6-29a2-4bbc-89ba-a7b93653459b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 23:59:30.823033   84259 system_pods.go:61] "metrics-server-6867b74b74-lgzdz" [2c251249-58b6-44c4-929a-0f5c963d83b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:59:30.823039   84259 system_pods.go:61] "storage-provisioner" [63078ee5-81b8-4548-88bf-2b89857ea01a] Running
	I1209 23:59:30.823050   84259 system_pods.go:74] duration metric: took 10.914419ms to wait for pod list to return data ...
	I1209 23:59:30.823063   84259 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:59:30.828012   84259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:59:30.828062   84259 node_conditions.go:123] node cpu capacity is 2
	I1209 23:59:30.828076   84259 node_conditions.go:105] duration metric: took 5.004481ms to run NodePressure ...
	I1209 23:59:30.828097   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:31.092839   84259 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 23:59:31.097588   84259 kubeadm.go:739] kubelet initialised
	I1209 23:59:31.097613   84259 kubeadm.go:740] duration metric: took 4.744115ms waiting for restarted kubelet to initialise ...
	I1209 23:59:31.097623   84259 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:59:31.104318   84259 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace to be "Ready" ...
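	(pod_ready.go above repeatedly checks whether each system-critical pod reports the Ready condition, which is what the later "Ready":"False" lines track. A hedged client-go sketch of that check; the kubeconfig path is a placeholder and this is not minikube's own code.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-wclgl", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}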
	I1209 23:59:30.991511   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:33.490390   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:34.824186   83859 start.go:364] duration metric: took 55.338760885s to acquireMachinesLock for "no-preload-048296"
	I1209 23:59:34.824245   83859 start.go:96] Skipping create...Using existing machine configuration
	I1209 23:59:34.824272   83859 fix.go:54] fixHost starting: 
	I1209 23:59:34.824660   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1209 23:59:34.824713   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:59:34.842421   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I1209 23:59:34.842851   83859 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:59:34.843297   83859 main.go:141] libmachine: Using API Version  1
	I1209 23:59:34.843319   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:59:34.843701   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:59:34.843916   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:34.844066   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1209 23:59:34.845768   83859 fix.go:112] recreateIfNeeded on no-preload-048296: state=Stopped err=<nil>
	I1209 23:59:34.845792   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	W1209 23:59:34.845958   83859 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 23:59:34.847617   83859 out.go:177] * Restarting existing kvm2 VM for "no-preload-048296" ...
	I1209 23:59:33.378848   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379297   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has current primary IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.379326   84547 main.go:141] libmachine: (old-k8s-version-720064) Found IP for machine: 192.168.39.188
	I1209 23:59:33.379351   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserving static IP address...
	I1209 23:59:33.379701   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.379722   84547 main.go:141] libmachine: (old-k8s-version-720064) Reserved static IP address: 192.168.39.188
	I1209 23:59:33.379743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | skip adding static IP to network mk-old-k8s-version-720064 - found existing host DHCP lease matching {name: "old-k8s-version-720064", mac: "52:54:00:a1:91:00", ip: "192.168.39.188"}
	I1209 23:59:33.379759   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Getting to WaitForSSH function...
	I1209 23:59:33.379776   84547 main.go:141] libmachine: (old-k8s-version-720064) Waiting for SSH to be available...
	I1209 23:59:33.381697   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.381990   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.382027   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.382137   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH client type: external
	I1209 23:59:33.382163   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa (-rw-------)
	I1209 23:59:33.382193   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:33.382212   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | About to run SSH command:
	I1209 23:59:33.382229   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | exit 0
	I1209 23:59:33.507653   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | SSH cmd err, output: <nil>: 
	I1209 23:59:33.507957   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetConfigRaw
	I1209 23:59:33.508555   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.511020   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511399   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.511431   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.511695   84547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/config.json ...
	I1209 23:59:33.511932   84547 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:33.511958   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:33.512144   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.514383   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514722   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.514743   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.514876   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.515048   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515187   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.515349   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.515596   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.515835   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.515850   84547 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:33.623370   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:33.623407   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623670   84547 buildroot.go:166] provisioning hostname "old-k8s-version-720064"
	I1209 23:59:33.623708   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.623909   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.626370   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626749   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.626781   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.626943   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.627172   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627348   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.627490   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.627666   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.627868   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.627884   84547 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-720064 && echo "old-k8s-version-720064" | sudo tee /etc/hostname
	I1209 23:59:33.749338   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-720064
	
	I1209 23:59:33.749370   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.752401   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.752774   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.752807   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.753000   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:33.753180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753372   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:33.753531   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:33.753674   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:33.753837   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:33.753853   84547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-720064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-720064/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-720064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:33.872512   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
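	(The hostname and /etc/hosts steps above are plain shell run over SSH with the machine's id_rsa key. A minimal sketch of that pattern using golang.org/x/crypto/ssh follows; it is not libmachine's implementation, though the host, user and key path are taken from the log.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs a single command on the VM over SSH, the way the
// provisioning steps above run `hostname` and edit /etc/hosts.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.39.188:22", "docker",
		"/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa",
		"hostname")
	fmt.Println(out, err)
}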
	I1209 23:59:33.872548   84547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:33.872575   84547 buildroot.go:174] setting up certificates
	I1209 23:59:33.872585   84547 provision.go:84] configureAuth start
	I1209 23:59:33.872597   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetMachineName
	I1209 23:59:33.872906   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:33.875719   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876012   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.876051   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.876210   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:33.878539   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.878857   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:33.878894   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:33.879025   84547 provision.go:143] copyHostCerts
	I1209 23:59:33.879087   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:33.879100   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:33.879166   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:33.879254   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:33.879262   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:33.879283   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:33.879338   84547 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:33.879346   84547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:33.879365   84547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:33.879414   84547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-720064 san=[127.0.0.1 192.168.39.188 localhost minikube old-k8s-version-720064]
	I1209 23:59:34.211417   84547 provision.go:177] copyRemoteCerts
	I1209 23:59:34.211475   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:34.211504   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.214285   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.214686   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.214789   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.215006   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.215180   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.215345   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.301399   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 23:59:34.326216   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:34.349136   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 23:59:34.371597   84547 provision.go:87] duration metric: took 498.999263ms to configureAuth
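	(configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.39.188, localhost, minikube and the machine name. A hedged crypto/x509 sketch of issuing such a cert from a CA; key sizes, validity periods and the self-signed CA are assumptions, and error handling is elided for brevity.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A throwaway self-signed CA stands in for minikube's ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs listed in the provisioning log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-720064"}},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-720064"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.188")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}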
	I1209 23:59:34.371633   84547 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:34.371832   84547 config.go:182] Loaded profile config "old-k8s-version-720064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:59:34.371902   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.374649   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.374985   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.375031   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.375161   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.375368   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375541   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.375735   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.375897   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.376128   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.376152   84547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:34.588357   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:34.588391   84547 machine.go:96] duration metric: took 1.076442399s to provisionDockerMachine
	I1209 23:59:34.588402   84547 start.go:293] postStartSetup for "old-k8s-version-720064" (driver="kvm2")
	I1209 23:59:34.588412   84547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:34.588443   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.588749   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:34.588772   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.591413   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.591805   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.591836   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.592001   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.592176   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.592325   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.592431   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.673445   84547 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:34.677394   84547 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:34.677421   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:34.677498   84547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:34.677600   84547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:34.677716   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:34.686261   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:34.709412   84547 start.go:296] duration metric: took 120.993275ms for postStartSetup
	I1209 23:59:34.709467   84547 fix.go:56] duration metric: took 19.749026723s for fixHost
	I1209 23:59:34.709495   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.712458   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712762   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.712796   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.712930   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.713158   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713335   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.713506   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.713690   84547 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:34.713852   84547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1209 23:59:34.713862   84547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:34.823993   84547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788774.778513264
	
	I1209 23:59:34.824022   84547 fix.go:216] guest clock: 1733788774.778513264
	I1209 23:59:34.824034   84547 fix.go:229] Guest: 2024-12-09 23:59:34.778513264 +0000 UTC Remote: 2024-12-09 23:59:34.709473288 +0000 UTC m=+249.236536627 (delta=69.039976ms)
	I1209 23:59:34.824067   84547 fix.go:200] guest clock delta is within tolerance: 69.039976ms
	I1209 23:59:34.824075   84547 start.go:83] releasing machines lock for "old-k8s-version-720064", held for 19.863672525s
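	(The guest clock check above runs `date +%s.%N` on the VM and compares the result with the host clock, accepting a small delta. A minimal sketch of that comparison; the tolerance value is an assumption, since minikube's actual threshold is not shown in the log.)

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is from the local (host) clock.
func clockDelta(guestOut string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := time.Since(guest)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// The value captured in the log above.
	d, err := clockDelta("1733788774.778513264")
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance, not minikube's constant
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", d, d <= tolerance)
}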
	I1209 23:59:34.824110   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.824391   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:34.827165   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827596   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.827629   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.827894   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828467   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828690   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .DriverName
	I1209 23:59:34.828802   84547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:34.828865   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.828888   84547 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:34.828917   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHHostname
	I1209 23:59:34.831978   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832052   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832514   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832548   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:34.832572   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832748   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:34.832751   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832899   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHPort
	I1209 23:59:34.832982   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.832986   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHKeyPath
	I1209 23:59:34.833198   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833217   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetSSHUsername
	I1209 23:59:34.833370   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.833374   84547 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/old-k8s-version-720064/id_rsa Username:docker}
	I1209 23:59:34.949044   84547 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:34.954999   84547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:35.100096   84547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:35.105764   84547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:35.105852   84547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:35.126893   84547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:35.126915   84547 start.go:495] detecting cgroup driver to use...
	I1209 23:59:35.126968   84547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:35.143236   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:35.157019   84547 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:35.157098   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:35.170608   84547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:35.184816   84547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:35.310043   84547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:35.486472   84547 docker.go:233] disabling docker service ...
	I1209 23:59:35.486653   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:35.501938   84547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:35.516997   84547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:35.667304   84547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:35.821316   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:35.836423   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:35.854633   84547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 23:59:35.854719   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.865592   84547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:35.865677   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.877247   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.889007   84547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:35.900196   84547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:35.912403   84547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:35.922919   84547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:35.922987   84547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:35.939439   84547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:35.949715   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:36.100115   84547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:36.207129   84547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:36.207206   84547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:36.213672   84547 start.go:563] Will wait 60s for crictl version
	I1209 23:59:36.213758   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:36.218204   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:36.255262   84547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:59:36.255367   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.286277   84547 ssh_runner.go:195] Run: crio --version
	I1209 23:59:36.317334   84547 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
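
The 84547 log lines above reconfigure cri-o by rewriting /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup) and then restarting the service. A minimal Go sketch of the same sequence, run locally with os/exec rather than over minikube's ssh_runner, and assuming the drop-in file already exists on the host:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// reconfigureCRIO applies the same edits the log shows: point cri-o at the
// requested pause image, switch the cgroup manager, then restart cri-o.
// Illustrative only; minikube issues these commands over SSH to the guest.
func reconfigureCRIO(pauseImage, cgroupManager string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	cmds := []string{
		fmt.Sprintf(`sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		"systemctl daemon-reload",
		"systemctl restart crio",
	}
	for _, c := range cmds {
		if out, err := exec.Command("sudo", "sh", "-c", c).CombinedOutput(); err != nil {
			return fmt.Errorf("%q failed: %v: %s", c, err, out)
		}
	}
	return nil
}

func main() {
	if err := reconfigureCRIO("registry.k8s.io/pause:3.2", "cgroupfs"); err != nil {
		log.Fatal(err)
	}
}
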
	I1209 23:59:34.848733   83859 main.go:141] libmachine: (no-preload-048296) Calling .Start
	I1209 23:59:34.848943   83859 main.go:141] libmachine: (no-preload-048296) Ensuring networks are active...
	I1209 23:59:34.849641   83859 main.go:141] libmachine: (no-preload-048296) Ensuring network default is active
	I1209 23:59:34.850039   83859 main.go:141] libmachine: (no-preload-048296) Ensuring network mk-no-preload-048296 is active
	I1209 23:59:34.850540   83859 main.go:141] libmachine: (no-preload-048296) Getting domain xml...
	I1209 23:59:34.851224   83859 main.go:141] libmachine: (no-preload-048296) Creating domain...
	I1209 23:59:36.270761   83859 main.go:141] libmachine: (no-preload-048296) Waiting to get IP...
	I1209 23:59:36.271970   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.272578   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.272645   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.272536   85657 retry.go:31] will retry after 253.181092ms: waiting for machine to come up
	I1209 23:59:36.527128   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.527735   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.527762   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.527683   85657 retry.go:31] will retry after 297.608817ms: waiting for machine to come up
	I1209 23:59:36.827372   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:36.828819   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:36.828849   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:36.828763   85657 retry.go:31] will retry after 374.112777ms: waiting for machine to come up
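
The repeated "will retry after ...: waiting for machine to come up" lines are a polling loop with growing, jittered backoff while libvirt has not yet handed the no-preload-048296 domain a DHCP lease. A self-contained Go sketch of that pattern, with a hypothetical lookupIP callback standing in for the real libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with a jittered, growing backoff until it returns
// an address or the timeout expires, mirroring the retry.go lines above.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Toy lookup that "finds" an address after a few attempts.
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.61.182", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
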
	I1209 23:59:33.112105   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:35.611353   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:36.115472   84259 pod_ready.go:93] pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:36.115504   84259 pod_ready.go:82] duration metric: took 5.011153637s for pod "coredns-7c65d6cfc9-wclgl" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:36.115521   84259 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:35.492415   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:37.992287   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:36.318651   84547 main.go:141] libmachine: (old-k8s-version-720064) Calling .GetIP
	I1209 23:59:36.321927   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322336   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:91:00", ip: ""} in network mk-old-k8s-version-720064: {Iface:virbr1 ExpiryTime:2024-12-10 00:59:26 +0000 UTC Type:0 Mac:52:54:00:a1:91:00 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:old-k8s-version-720064 Clientid:01:52:54:00:a1:91:00}
	I1209 23:59:36.322366   84547 main.go:141] libmachine: (old-k8s-version-720064) DBG | domain old-k8s-version-720064 has defined IP address 192.168.39.188 and MAC address 52:54:00:a1:91:00 in network mk-old-k8s-version-720064
	I1209 23:59:36.322619   84547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:36.327938   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:36.344152   84547 kubeadm.go:883] updating cluster {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:36.344279   84547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:59:36.344328   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:36.391261   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:36.391348   84547 ssh_runner.go:195] Run: which lz4
	I1209 23:59:36.395391   84547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:59:36.399585   84547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:59:36.399622   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 23:59:37.969251   84547 crio.go:462] duration metric: took 1.573891158s to copy over tarball
	I1209 23:59:37.969362   84547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:59:37.204687   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:37.205458   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:37.205479   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:37.205372   85657 retry.go:31] will retry after 420.490569ms: waiting for machine to come up
	I1209 23:59:37.627167   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:37.629813   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:37.629839   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:37.629687   85657 retry.go:31] will retry after 561.795207ms: waiting for machine to come up
	I1209 23:59:38.193652   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:38.194178   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:38.194200   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:38.194119   85657 retry.go:31] will retry after 925.018936ms: waiting for machine to come up
	I1209 23:59:39.121476   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:39.122014   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:39.122046   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:39.121958   85657 retry.go:31] will retry after 984.547761ms: waiting for machine to come up
	I1209 23:59:40.108478   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:40.108947   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:40.108981   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:40.108925   85657 retry.go:31] will retry after 1.261498912s: waiting for machine to come up
	I1209 23:59:41.372403   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:41.372901   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:41.372922   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:41.372884   85657 retry.go:31] will retry after 1.498283156s: waiting for machine to come up
	I1209 23:59:38.122964   84259 pod_ready.go:93] pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:38.122990   84259 pod_ready.go:82] duration metric: took 2.007459606s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:38.123004   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.133257   84259 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:40.631911   84259 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:40.631940   84259 pod_ready.go:82] duration metric: took 2.508926956s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.631957   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:40.492044   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:42.991179   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:41.050494   84547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081073233s)
	I1209 23:59:41.050524   84547 crio.go:469] duration metric: took 3.081237568s to extract the tarball
	I1209 23:59:41.050533   84547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 23:59:41.092694   84547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:41.125264   84547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 23:59:41.125289   84547 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.125378   84547 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.125388   84547 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.125360   84547 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.125436   84547 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 23:59:41.125466   84547 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.125497   84547 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.125515   84547 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127285   84547 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.127325   84547 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 23:59:41.127376   84547 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.127405   84547 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.127421   84547 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.127293   84547 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.127300   84547 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.302277   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.314265   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 23:59:41.323683   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.326327   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.345042   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.345550   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.362324   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.367612   84547 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 23:59:41.367663   84547 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.367706   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.406644   84547 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 23:59:41.406691   84547 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 23:59:41.406783   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.450490   84547 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 23:59:41.450550   84547 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.450601   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.477332   84547 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 23:59:41.477389   84547 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.477438   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499714   84547 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 23:59:41.499757   84547 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.499783   84547 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 23:59:41.499806   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.499818   84547 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.499861   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505128   84547 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 23:59:41.505179   84547 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.505205   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.505249   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.505212   84547 ssh_runner.go:195] Run: which crictl
	I1209 23:59:41.505306   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.505342   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.507860   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.507923   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.643108   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.644251   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.644273   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.644358   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.644400   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.650732   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.650817   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.777981   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 23:59:41.789388   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 23:59:41.789435   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 23:59:41.789491   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 23:59:41.789561   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.789592   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 23:59:41.802784   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 23:59:41.912370   84547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:41.914494   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 23:59:41.944460   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 23:59:41.944564   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 23:59:41.944569   84547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 23:59:41.944660   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 23:59:41.944662   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 23:59:41.944726   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 23:59:42.104003   84547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 23:59:42.104074   84547 cache_images.go:92] duration metric: took 978.7734ms to LoadCachedImages
	W1209 23:59:42.104176   84547 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1209 23:59:42.104198   84547 kubeadm.go:934] updating node { 192.168.39.188 8443 v1.20.0 crio true true} ...
	I1209 23:59:42.104326   84547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-720064 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:59:42.104431   84547 ssh_runner.go:195] Run: crio config
	I1209 23:59:42.154016   84547 cni.go:84] Creating CNI manager for ""
	I1209 23:59:42.154041   84547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:59:42.154050   84547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:59:42.154066   84547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.188 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-720064 NodeName:old-k8s-version-720064 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 23:59:42.154226   84547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-720064"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.188
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.188"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:59:42.154292   84547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 23:59:42.164866   84547 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:59:42.164943   84547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:59:42.175137   84547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 23:59:42.192176   84547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:59:42.210695   84547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 23:59:42.230364   84547 ssh_runner.go:195] Run: grep 192.168.39.188	control-plane.minikube.internal$ /etc/hosts
	I1209 23:59:42.234239   84547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:42.246747   84547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:42.379643   84547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:59:42.396626   84547 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064 for IP: 192.168.39.188
	I1209 23:59:42.396713   84547 certs.go:194] generating shared ca certs ...
	I1209 23:59:42.396746   84547 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.396942   84547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1209 23:59:42.397007   84547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1209 23:59:42.397023   84547 certs.go:256] generating profile certs ...
	I1209 23:59:42.397191   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/client.key
	I1209 23:59:42.397270   84547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key.abcbe3fa
	I1209 23:59:42.397330   84547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key
	I1209 23:59:42.397516   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1209 23:59:42.397564   84547 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1209 23:59:42.397580   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:59:42.397623   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1209 23:59:42.397662   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:59:42.397701   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1209 23:59:42.397768   84547 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:42.398726   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:59:42.450053   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 23:59:42.475685   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:59:42.514396   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:59:42.562383   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 23:59:42.589690   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 23:59:42.614149   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:59:42.647112   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/old-k8s-version-720064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:59:42.671117   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:59:42.694303   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1209 23:59:42.722563   84547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1209 23:59:42.753478   84547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:59:42.773673   84547 ssh_runner.go:195] Run: openssl version
	I1209 23:59:42.779930   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:59:42.791531   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796422   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.796536   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:59:42.802386   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:59:42.817058   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1209 23:59:42.828570   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833292   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.833357   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1209 23:59:42.839679   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1209 23:59:42.850729   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1209 23:59:42.861699   84547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866417   84547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.866479   84547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1209 23:59:42.873602   84547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 23:59:42.884398   84547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:59:42.889137   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 23:59:42.896108   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 23:59:42.902584   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 23:59:42.908706   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 23:59:42.914313   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 23:59:42.919943   84547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
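
The openssl x509 -checkend 86400 runs above confirm that each control-plane certificate remains valid for at least the next 24 hours before the cluster restart proceeds. The same check can be written directly against crypto/x509; this is an illustrative sketch, not minikube's own code, and the path in main is just one of the certificates listed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, i.e. the condition `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}
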
	I1209 23:59:42.925417   84547 kubeadm.go:392] StartCluster: {Name:old-k8s-version-720064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-720064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:59:42.925546   84547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:59:42.925602   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:42.962948   84547 cri.go:89] found id: ""
	I1209 23:59:42.963030   84547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:59:42.973746   84547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 23:59:42.973768   84547 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 23:59:42.973819   84547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 23:59:42.983593   84547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:59:42.984528   84547 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-720064" does not appear in /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:59:42.985106   84547 kubeconfig.go:62] /home/jenkins/minikube-integration/19888-18950/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-720064" cluster setting kubeconfig missing "old-k8s-version-720064" context setting]
	I1209 23:59:42.985981   84547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:59:42.996842   84547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 23:59:43.007858   84547 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.188
	I1209 23:59:43.007901   84547 kubeadm.go:1160] stopping kube-system containers ...
	I1209 23:59:43.007916   84547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 23:59:43.007982   84547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:59:43.046416   84547 cri.go:89] found id: ""
	I1209 23:59:43.046495   84547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 23:59:43.063226   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:59:43.073919   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:59:43.073946   84547 kubeadm.go:157] found existing configuration files:
	
	I1209 23:59:43.073991   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:59:43.084217   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:59:43.084292   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:59:43.094196   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:59:43.104098   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:59:43.104178   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:59:43.115056   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.124696   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:59:43.124761   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:59:43.135180   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:59:43.146325   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:59:43.146392   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:59:43.156017   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:59:43.166584   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:43.376941   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.198180   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.431002   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.549010   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 23:59:44.644736   84547 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:59:44.644827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:45.145713   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:42.872724   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:42.873229   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:42.873260   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:42.873178   85657 retry.go:31] will retry after 1.469240763s: waiting for machine to come up
	I1209 23:59:44.344825   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:44.345339   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:44.345374   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:44.345317   85657 retry.go:31] will retry after 1.85673524s: waiting for machine to come up
	I1209 23:59:46.203693   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:46.204128   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:46.204152   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:46.204065   85657 retry.go:31] will retry after 3.5180616s: waiting for machine to come up
	I1209 23:59:43.003893   84259 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:43.734778   84259 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:43.734806   84259 pod_ready.go:82] duration metric: took 3.102839103s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.734821   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d7sxm" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.743393   84259 pod_ready.go:93] pod "kube-proxy-d7sxm" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:43.743421   84259 pod_ready.go:82] duration metric: took 8.592191ms for pod "kube-proxy-d7sxm" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:43.743435   84259 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:44.250028   84259 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1209 23:59:44.250051   84259 pod_ready.go:82] duration metric: took 506.607784ms for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:44.250063   84259 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" ...
	I1209 23:59:46.256096   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:45.494189   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:47.993074   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
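
The pod_ready lines poll a pod's Ready condition until it reports True or the 4m0s budget runs out. A rough client-go sketch of that wait, assuming a kubeconfig at the path the log mentions and reusing the metrics-server pod name from the lines above; this is not the test harness's own helper:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady re-fetches the pod every two seconds until it is Ready or the
// timeout expires, roughly what the pod_ready log lines are doing.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19888-18950/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitPodReady(cs, "kube-system", "metrics-server-6867b74b74-lgzdz", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod is Ready")
}
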
	I1209 23:59:45.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.145300   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:46.644880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.144955   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:47.645081   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.145132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:48.645278   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.144932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.645874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:50.145061   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:49.725528   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:49.726040   83859 main.go:141] libmachine: (no-preload-048296) DBG | unable to find current IP address of domain no-preload-048296 in network mk-no-preload-048296
	I1209 23:59:49.726069   83859 main.go:141] libmachine: (no-preload-048296) DBG | I1209 23:59:49.725982   85657 retry.go:31] will retry after 3.98915487s: waiting for machine to come up
	I1209 23:59:48.256397   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.757098   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.491110   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:52.989722   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:50.645171   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.144908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:51.645198   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.145700   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:52.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.145782   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.645678   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.145521   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:54.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:55.145578   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:53.718634   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.719141   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has current primary IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.719163   83859 main.go:141] libmachine: (no-preload-048296) Found IP for machine: 192.168.61.182
	I1209 23:59:53.719173   83859 main.go:141] libmachine: (no-preload-048296) Reserving static IP address...
	I1209 23:59:53.719594   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "no-preload-048296", mac: "52:54:00:c6:cf:c7", ip: "192.168.61.182"} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.719621   83859 main.go:141] libmachine: (no-preload-048296) DBG | skip adding static IP to network mk-no-preload-048296 - found existing host DHCP lease matching {name: "no-preload-048296", mac: "52:54:00:c6:cf:c7", ip: "192.168.61.182"}
	I1209 23:59:53.719632   83859 main.go:141] libmachine: (no-preload-048296) Reserved static IP address: 192.168.61.182
	I1209 23:59:53.719641   83859 main.go:141] libmachine: (no-preload-048296) Waiting for SSH to be available...
	I1209 23:59:53.719671   83859 main.go:141] libmachine: (no-preload-048296) DBG | Getting to WaitForSSH function...
	I1209 23:59:53.722015   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.722309   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.722342   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.722445   83859 main.go:141] libmachine: (no-preload-048296) DBG | Using SSH client type: external
	I1209 23:59:53.722471   83859 main.go:141] libmachine: (no-preload-048296) DBG | Using SSH private key: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa (-rw-------)
	I1209 23:59:53.722504   83859 main.go:141] libmachine: (no-preload-048296) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:59:53.722517   83859 main.go:141] libmachine: (no-preload-048296) DBG | About to run SSH command:
	I1209 23:59:53.722529   83859 main.go:141] libmachine: (no-preload-048296) DBG | exit 0
	I1209 23:59:53.843261   83859 main.go:141] libmachine: (no-preload-048296) DBG | SSH cmd err, output: <nil>: 
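The WaitForSSH exchange above shells out to the system ssh binary and simply runs "exit 0" until the command succeeds. A rough Go equivalent is sketched below; it is not minikube's implementation, and the host, key path, retry count and sleep interval are placeholders.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest through the system ssh binary,
// mirroring the kind of options visible in the WaitForSSH log above.
func sshReady(host, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	host, key := "192.168.61.182", "/path/to/id_rsa" // placeholders
	for i := 0; i < 30; i++ {
		if sshReady(host, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}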
	I1209 23:59:53.843673   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetConfigRaw
	I1209 23:59:53.844367   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:53.846889   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.847170   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.847203   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.847433   83859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/config.json ...
	I1209 23:59:53.847656   83859 machine.go:93] provisionDockerMachine start ...
	I1209 23:59:53.847675   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:53.847834   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:53.849867   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.850179   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.850213   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.850372   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:53.850548   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.850702   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.850878   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:53.851058   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:53.851288   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:53.851303   83859 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:59:53.947889   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 23:59:53.947916   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:53.948220   83859 buildroot.go:166] provisioning hostname "no-preload-048296"
	I1209 23:59:53.948259   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:53.948496   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:53.951070   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.951479   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:53.951509   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:53.951729   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:53.951919   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.952120   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:53.952280   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:53.952456   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:53.952650   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:53.952672   83859 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-048296 && echo "no-preload-048296" | sudo tee /etc/hostname
	I1209 23:59:54.065706   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-048296
	
	I1209 23:59:54.065738   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.068629   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.068942   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.068975   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.069241   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.069493   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.069731   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.069939   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.070156   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.070325   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.070346   83859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-048296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-048296/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-048296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:59:54.180424   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
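The shell snippet run above rewrites the 127.0.1.1 entry in /etc/hosts so it points at the new hostname. A small Go sketch that renders the same script for an arbitrary hostname is shown below; the helper name is illustrative, not minikube's.

package main

import "fmt"

// hostsFixupScript renders the /etc/hosts fix-up script seen in the log,
// parameterised on the hostname.
func hostsFixupScript(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsFixupScript("no-preload-048296"))
}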
	I1209 23:59:54.180460   83859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19888-18950/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-18950/.minikube}
	I1209 23:59:54.180479   83859 buildroot.go:174] setting up certificates
	I1209 23:59:54.180490   83859 provision.go:84] configureAuth start
	I1209 23:59:54.180499   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetMachineName
	I1209 23:59:54.180802   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:54.183349   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.183703   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.183732   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.183852   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.186076   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.186341   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.186372   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.186498   83859 provision.go:143] copyHostCerts
	I1209 23:59:54.186554   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem, removing ...
	I1209 23:59:54.186564   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem
	I1209 23:59:54.186627   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/ca.pem (1082 bytes)
	I1209 23:59:54.186715   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem, removing ...
	I1209 23:59:54.186723   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem
	I1209 23:59:54.186744   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/cert.pem (1123 bytes)
	I1209 23:59:54.186805   83859 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem, removing ...
	I1209 23:59:54.186814   83859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem
	I1209 23:59:54.186831   83859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-18950/.minikube/key.pem (1679 bytes)
	I1209 23:59:54.186881   83859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem org=jenkins.no-preload-048296 san=[127.0.0.1 192.168.61.182 localhost minikube no-preload-048296]
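The server certificate above is issued from the shared CA with a fixed SAN list (127.0.0.1, 192.168.61.182, localhost, minikube, no-preload-048296). The sketch below shows the general technique with Go's crypto/x509 only; it creates a throwaway CA in memory instead of reading ca.pem/ca-key.pem, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, generated just to keep the sketch self-contained;
	// the real flow reuses ca.pem / ca-key.pem from the cert store.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs reported in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-048296"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-048296"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.182")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}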
	I1209 23:59:54.291230   83859 provision.go:177] copyRemoteCerts
	I1209 23:59:54.291296   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:59:54.291319   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.293968   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.294257   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.294283   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.294451   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.294654   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.294814   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.294963   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.377572   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 23:59:54.401222   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 23:59:54.425323   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:59:54.448358   83859 provision.go:87] duration metric: took 267.838318ms to configureAuth
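copyRemoteCerts above pushes ca.pem, server.pem and server-key.pem to the guest over the SSH connection. A hedged sketch of the same idea using the system scp binary is shown below; it is not minikube's ssh_runner, and the key path, source paths and /tmp destinations are placeholders (the real copies land in /etc/docker as root).

package main

import (
	"log"
	"os/exec"
)

// scpTo copies a local file to the guest with the system scp binary.
func scpTo(keyPath, src, host, dst string) error {
	return exec.Command("scp",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		src, "docker@"+host+":"+dst).Run()
}

func main() {
	files := map[string]string{ // local source -> remote destination (placeholders)
		"certs/ca.pem":            "/tmp/ca.pem",
		"machines/server.pem":     "/tmp/server.pem",
		"machines/server-key.pem": "/tmp/server-key.pem",
	}
	for src, dst := range files {
		if err := scpTo("id_rsa", src, "192.168.61.182", dst); err != nil {
			log.Println("scp failed:", src, err)
		}
	}
}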
	I1209 23:59:54.448387   83859 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:59:54.448563   83859 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:59:54.448629   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.451082   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.451422   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.451450   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.451678   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.451906   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.452088   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.452222   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.452374   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.452550   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.452567   83859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:59:54.665629   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:59:54.665663   83859 machine.go:96] duration metric: took 817.991465ms to provisionDockerMachine
	I1209 23:59:54.665677   83859 start.go:293] postStartSetup for "no-preload-048296" (driver="kvm2")
	I1209 23:59:54.665690   83859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:59:54.665712   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.666016   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:59:54.666043   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.668993   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.669501   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.669532   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.669666   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.669859   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.670004   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.670160   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.752122   83859 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:59:54.756546   83859 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:59:54.756569   83859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/addons for local assets ...
	I1209 23:59:54.756640   83859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-18950/.minikube/files for local assets ...
	I1209 23:59:54.756706   83859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem -> 262532.pem in /etc/ssl/certs
	I1209 23:59:54.756797   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 23:59:54.766465   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /etc/ssl/certs/262532.pem (1708 bytes)
	I1209 23:59:54.792214   83859 start.go:296] duration metric: took 126.521539ms for postStartSetup
	I1209 23:59:54.792263   83859 fix.go:56] duration metric: took 19.967992145s for fixHost
	I1209 23:59:54.792284   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.794921   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.795304   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.795334   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.795549   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.795807   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.795986   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.796154   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.796333   83859 main.go:141] libmachine: Using SSH client type: native
	I1209 23:59:54.796485   83859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1209 23:59:54.796495   83859 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:59:54.900382   83859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733788794.870799171
	
	I1209 23:59:54.900413   83859 fix.go:216] guest clock: 1733788794.870799171
	I1209 23:59:54.900423   83859 fix.go:229] Guest: 2024-12-09 23:59:54.870799171 +0000 UTC Remote: 2024-12-09 23:59:54.792267927 +0000 UTC m=+357.892420200 (delta=78.531244ms)
	I1209 23:59:54.900443   83859 fix.go:200] guest clock delta is within tolerance: 78.531244ms
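The guest-clock check above runs "date +%s.%N" in the VM and compares the result with the host clock recorded when the command returned. A minimal Go sketch of that comparison follows; the 2-second tolerance is an assumption for illustration.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, taken from the log above.
	guestOut := "1733788794.870799171"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest) // the real code uses the host time captured at command return
	fmt.Printf("guest clock delta: %v\n", delta)
	if math.Abs(delta.Seconds()) > 2 { // assumed tolerance
		fmt.Println("delta outside tolerance; the guest clock would need adjusting")
	}
}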
	I1209 23:59:54.900448   83859 start.go:83] releasing machines lock for "no-preload-048296", held for 20.076230285s
	I1209 23:59:54.900466   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.900763   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:54.903437   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.903762   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.903785   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.903941   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904412   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904583   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1209 23:59:54.904674   83859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:59:54.904735   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.904788   83859 ssh_runner.go:195] Run: cat /version.json
	I1209 23:59:54.904815   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1209 23:59:54.907540   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.907693   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.907936   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.907960   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.908083   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:54.908098   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.908108   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:54.908269   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1209 23:59:54.908332   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.908431   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1209 23:59:54.908578   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.908607   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1209 23:59:54.908750   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.908753   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1209 23:59:54.980588   83859 ssh_runner.go:195] Run: systemctl --version
	I1209 23:59:55.003079   83859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:59:55.152159   83859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:59:55.158212   83859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:59:55.158284   83859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:59:55.177510   83859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:59:55.177539   83859 start.go:495] detecting cgroup driver to use...
	I1209 23:59:55.177616   83859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:59:55.194262   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:59:55.210699   83859 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:59:55.210770   83859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:59:55.226308   83859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:59:55.242175   83859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:59:55.361845   83859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:59:55.500415   83859 docker.go:233] disabling docker service ...
	I1209 23:59:55.500487   83859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:59:55.515689   83859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:59:55.528651   83859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:59:55.663341   83859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:59:55.776773   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:59:55.790155   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:59:55.807749   83859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:59:55.807807   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.817580   83859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:59:55.817644   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.827975   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.837871   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.848118   83859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:59:55.858322   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.867754   83859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:59:55.884626   83859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
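The block above edits /etc/crio/crio.conf.d/02-crio.conf in place with a series of sed commands: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. The sketch below only composes and prints those commands (rather than running them over SSH); it is illustrative, not the code that produced the log.

package main

import "fmt"

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
	for _, c := range cmds {
		fmt.Println(c)
	}
}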
	I1209 23:59:55.894254   83859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:59:55.903128   83859 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:59:55.903187   83859 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:59:55.914887   83859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:59:55.924665   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:59:56.028206   83859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:59:56.117484   83859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:59:56.117573   83859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:59:56.122345   83859 start.go:563] Will wait 60s for crictl version
	I1209 23:59:56.122401   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.126032   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:59:56.161884   83859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
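The version block just above is the raw "crictl version" output. A small Go sketch of turning that key/value text into a map is shown below; the field names follow the log, the parser itself is illustrative.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	out := `Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.29.1
RuntimeApiVersion:  v1`
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	fmt.Println("cri-o version:", fields["RuntimeVersion"])
}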
	I1209 23:59:56.161978   83859 ssh_runner.go:195] Run: crio --version
	I1209 23:59:56.194758   83859 ssh_runner.go:195] Run: crio --version
	I1209 23:59:56.224190   83859 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:59:56.225560   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetIP
	I1209 23:59:56.228550   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:56.228928   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1209 23:59:56.228950   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1209 23:59:56.229163   83859 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 23:59:56.233615   83859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:59:56.245890   83859 kubeadm.go:883] updating cluster {Name:no-preload-048296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:59:56.246079   83859 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:59:56.246132   83859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:59:56.285601   83859 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:59:56.285629   83859 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 23:59:56.285694   83859 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:56.285734   83859 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.285761   83859 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.285813   83859 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.285858   83859 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.285900   83859 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.285810   83859 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 23:59:56.285983   83859 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.287414   83859 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 23:59:56.287436   83859 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.287436   83859 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.287446   83859 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.287465   83859 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.287500   83859 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.287550   83859 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:56.287550   83859 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.437713   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.453590   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.457708   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.468937   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1209 23:59:56.474766   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.477321   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.483791   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.530686   83859 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1209 23:59:56.530735   83859 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.530786   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.603275   83859 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1209 23:59:56.603327   83859 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.603376   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.612591   83859 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1209 23:59:56.612635   83859 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.612686   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726597   83859 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1209 23:59:56.726643   83859 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.726650   83859 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1209 23:59:56.726682   83859 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.726692   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726728   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726752   83859 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1209 23:59:56.726777   83859 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.726808   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:56.726824   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.726882   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.726889   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.795578   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.795624   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.795646   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.795581   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.795723   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.795847   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:56.905584   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 23:59:56.909282   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 23:59:56.927659   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:56.927832   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:56.927928   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:56.932487   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 23:59:53.256494   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:55.257888   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:54.991894   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:57.491500   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:55.645853   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.145844   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.644899   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.145431   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:57.645625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.145096   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:58.645933   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.145181   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:59.645624   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:00.145874   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:59:56.971745   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1209 23:59:56.971844   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:56.987218   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 23:59:56.987380   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 23:59:57.050691   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 23:59:57.064778   83859 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:57.087868   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 23:59:57.087972   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 23:59:57.089176   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 23:59:57.089235   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1209 23:59:57.089262   83859 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:57.089269   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 23:59:57.089311   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1209 23:59:57.089355   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1209 23:59:57.098939   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 23:59:57.099044   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 23:59:57.148482   83859 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1209 23:59:57.148532   83859 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:59:57.148588   83859 ssh_runner.go:195] Run: which crictl
	I1209 23:59:57.173723   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 23:59:57.173810   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 23:59:57.186640   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 23:59:57.186734   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:00.723670   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.634331388s)
	I1210 00:00:00.723704   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1210 00:00:00.723717   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (3.634425647s)
	I1210 00:00:00.723751   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1210 00:00:00.723728   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 00:00:00.723767   83859 ssh_runner.go:235] Completed: which crictl: (3.575163822s)
	I1210 00:00:00.723815   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:00.723857   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (3.550033094s)
	I1210 00:00:00.723816   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 00:00:00.723909   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (3.537159613s)
	I1210 00:00:00.723934   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1210 00:00:00.723854   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.624679414s)
	I1210 00:00:00.723974   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1210 00:00:00.723880   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1209 23:59:57.756271   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:00.256451   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:02.256956   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:59:59.491859   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:01.992860   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:00.644886   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.145063   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:01.645932   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.145743   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.645396   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.145007   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:03.645927   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.145876   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:04.645825   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:05.145906   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:02.713826   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.989917623s)
	I1210 00:00:02.713866   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1210 00:00:02.713883   83859 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.99004494s)
	I1210 00:00:02.713896   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 00:00:02.713949   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:02.713952   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 00:00:02.753832   83859 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:00:04.780306   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.066273178s)
	I1210 00:00:04.780344   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1210 00:00:04.780368   83859 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:04.780432   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1210 00:00:04.780430   83859 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.026560705s)
	I1210 00:00:04.780544   83859 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 00:00:04.780618   83859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:06.756731   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.976269529s)
	I1210 00:00:06.756763   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1210 00:00:06.756774   83859 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.976141757s)
	I1210 00:00:06.756793   83859 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 00:00:06.756791   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 00:00:06.756846   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 00:00:04.258390   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:06.756422   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:04.490429   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:06.991054   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:08.991247   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:05.644919   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.145142   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:06.645240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.144864   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:07.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.145246   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.645965   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:09.645731   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:10.145638   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:08.712114   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.955245193s)
	I1210 00:00:08.712142   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1210 00:00:08.712166   83859 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 00:00:08.712212   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 00:00:10.892294   83859 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.180058873s)
	I1210 00:00:10.892319   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1210 00:00:10.892352   83859 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:10.892391   83859 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 00:00:11.542174   83859 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19888-18950/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 00:00:11.542214   83859 cache_images.go:123] Successfully loaded all cached images
	I1210 00:00:11.542219   83859 cache_images.go:92] duration metric: took 15.256564286s to LoadCachedImages
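
The image-load sequence above (sudo podman load -i /var/lib/minikube/images/<name>) is how the no-preload profile pushes its pre-cached tarballs into the containers/storage that CRI-O shares with podman. As a rough illustration only, assuming it runs directly on the node rather than through minikube's ssh_runner, a minimal Go sketch of that loop could look like:

// podman_load.go - illustrative sketch of the image-load step above;
// not minikube's cache_images.go.
package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage runs the same command the log shows
// ("sudo podman load -i /var/lib/minikube/images/<image>") so that the
// pre-cached tarball lands in the storage CRI-O reads from.
func loadCachedImage(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	// Paths taken from the log above; adjust for other profiles.
	for _, img := range []string{
		"/var/lib/minikube/images/coredns_v1.11.3",
		"/var/lib/minikube/images/kube-controller-manager_v1.31.2",
		"/var/lib/minikube/images/kube-proxy_v1.31.2",
		"/var/lib/minikube/images/storage-provisioner_v5",
	} {
		if err := loadCachedImage(img); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Println("loaded", img)
	}
}
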
	I1210 00:00:11.542230   83859 kubeadm.go:934] updating node { 192.168.61.182 8443 v1.31.2 crio true true} ...
	I1210 00:00:11.542334   83859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-048296 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:00:11.542397   83859 ssh_runner.go:195] Run: crio config
	I1210 00:00:11.586768   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:00:11.586787   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:00:11.586795   83859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:00:11.586817   83859 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.182 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-048296 NodeName:no-preload-048296 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:00:11.586932   83859 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-048296"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.182"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.182"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:00:11.586992   83859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:00:11.597936   83859 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:00:11.597996   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:00:11.608068   83859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 00:00:11.624881   83859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:00:11.640934   83859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
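
The configuration dump above is the multi-document kubeadm.yaml (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube renders and then copies to /var/tmp/minikube/kubeadm.yaml.new in the scp step just above. As a stdlib-only sketch, not minikube's actual code, splitting such a file into its documents and reporting each kind might look like:

// split_kubeadm_docs.go - illustrative sketch, not part of minikube.
// Splits a rendered multi-document kubeadm.yaml into its documents and
// prints the "kind:" of each one.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path taken from the log above.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read config:", err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: kind=%s\n", i+1, kind)
	}
}
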
	I1210 00:00:11.658113   83859 ssh_runner.go:195] Run: grep 192.168.61.182	control-plane.minikube.internal$ /etc/hosts
	I1210 00:00:11.662360   83859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:00:11.674732   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:00:11.803743   83859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:00:11.821169   83859 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296 for IP: 192.168.61.182
	I1210 00:00:11.821191   83859 certs.go:194] generating shared ca certs ...
	I1210 00:00:11.821211   83859 certs.go:226] acquiring lock for ca certs: {Name:mkd265720ad4e9ec56deaf6f6ee2e43eb34d5a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:00:11.821404   83859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key
	I1210 00:00:11.821485   83859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key
	I1210 00:00:11.821502   83859 certs.go:256] generating profile certs ...
	I1210 00:00:11.821583   83859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/client.key
	I1210 00:00:11.821644   83859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.key.9304569d
	I1210 00:00:11.821677   83859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.key
	I1210 00:00:11.821783   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem (1338 bytes)
	W1210 00:00:11.821811   83859 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253_empty.pem, impossibly tiny 0 bytes
	I1210 00:00:11.821821   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:00:11.821848   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/ca.pem (1082 bytes)
	I1210 00:00:11.821881   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:00:11.821917   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/certs/key.pem (1679 bytes)
	I1210 00:00:11.821965   83859 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem (1708 bytes)
	I1210 00:00:11.822649   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:00:11.867065   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 00:00:11.898744   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:00:11.932711   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:00:08.758664   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:11.257156   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:11.491011   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:13.491036   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:10.645462   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.145646   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.645921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.145804   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:12.644924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.145055   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:13.645811   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.144877   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:14.645709   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.145827   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:11.966670   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:00:11.997827   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:00:12.023344   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:00:12.048872   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/no-preload-048296/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:00:12.074332   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:00:12.097886   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/certs/26253.pem --> /usr/share/ca-certificates/26253.pem (1338 bytes)
	I1210 00:00:12.121883   83859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/ssl/certs/262532.pem --> /usr/share/ca-certificates/262532.pem (1708 bytes)
	I1210 00:00:12.145451   83859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:00:12.167089   83859 ssh_runner.go:195] Run: openssl version
	I1210 00:00:12.172747   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262532.pem && ln -fs /usr/share/ca-certificates/262532.pem /etc/ssl/certs/262532.pem"
	I1210 00:00:12.183718   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.188458   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 22:45 /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.188521   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262532.pem
	I1210 00:00:12.194537   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/262532.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:00:12.205725   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:00:12.216441   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.221179   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 22:32 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.221237   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:00:12.226942   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:00:12.237830   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26253.pem && ln -fs /usr/share/ca-certificates/26253.pem /etc/ssl/certs/26253.pem"
	I1210 00:00:12.248188   83859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.253689   83859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 22:45 /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.253747   83859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26253.pem
	I1210 00:00:12.259326   83859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26253.pem /etc/ssl/certs/51391683.0"
	I1210 00:00:12.269796   83859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:00:12.274316   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:00:12.280263   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:00:12.286235   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:00:12.292330   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:00:12.298347   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:00:12.304186   83859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 00:00:12.310189   83859 kubeadm.go:392] StartCluster: {Name:no-preload-048296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-048296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:00:12.310296   83859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:00:12.310349   83859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:00:12.346589   83859 cri.go:89] found id: ""
	I1210 00:00:12.346666   83859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:00:12.357674   83859 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 00:00:12.357701   83859 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 00:00:12.357753   83859 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 00:00:12.367817   83859 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:00:12.368827   83859 kubeconfig.go:125] found "no-preload-048296" server: "https://192.168.61.182:8443"
	I1210 00:00:12.371117   83859 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 00:00:12.380367   83859 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.182
	I1210 00:00:12.380393   83859 kubeadm.go:1160] stopping kube-system containers ...
	I1210 00:00:12.380404   83859 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 00:00:12.380446   83859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:00:12.413083   83859 cri.go:89] found id: ""
	I1210 00:00:12.413143   83859 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 00:00:12.429819   83859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:00:12.439244   83859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:00:12.439269   83859 kubeadm.go:157] found existing configuration files:
	
	I1210 00:00:12.439336   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:00:12.448182   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:00:12.448257   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:00:12.457759   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:00:12.467372   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:00:12.467449   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:00:12.476877   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:00:12.485831   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:00:12.485898   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:00:12.495537   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:00:12.504333   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:00:12.504398   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:00:12.514572   83859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:00:12.524134   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:12.619683   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.361844   83859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.742129567s)
	I1210 00:00:14.361876   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.585241   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.653166   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:14.771204   83859 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:00:14.771306   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.272143   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.771676   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:15.789350   83859 api_server.go:72] duration metric: took 1.018150373s to wait for apiserver process to appear ...
	I1210 00:00:15.789378   83859 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:00:15.789396   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:15.789878   83859 api_server.go:269] stopped: https://192.168.61.182:8443/healthz: Get "https://192.168.61.182:8443/healthz": dial tcp 192.168.61.182:8443: connect: connection refused
	I1210 00:00:16.289518   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:13.757326   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:15.757843   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:15.491876   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:17.991160   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:18.572189   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:00:18.572233   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:00:18.572262   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:18.607691   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:00:18.607732   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:00:18.790007   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:18.794252   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:00:18.794281   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:00:19.289819   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:19.294079   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:00:19.294104   83859 api_server.go:103] status: https://192.168.61.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:00:19.789647   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:00:19.794119   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 200:
	ok
	I1210 00:00:19.800447   83859 api_server.go:141] control plane version: v1.31.2
	I1210 00:00:19.800477   83859 api_server.go:131] duration metric: took 4.011091942s to wait for apiserver health ...
	I1210 00:00:19.800488   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:00:19.800496   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:00:19.802341   83859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
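
The healthz probes above show the usual restart sequence: connection refused while the apiserver comes up, 403 for the anonymous probe before the RBAC bootstrap roles exist, 500 while post-start hooks finish, and finally 200 "ok". A minimal sketch of that kind of poll loop, assuming the same https://<node-ip>:8443/healthz endpoint and skipping verification of the self-signed serving certificate as an anonymous probe must, could be:

// healthz_wait.go - illustrative sketch of the poll pattern seen above;
// not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok" or the deadline passes. Non-200 answers (403 before RBAC
// bootstrap, 500 while post-start hooks run) count as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Anonymous probe has no client certificate, so skip verifying
			// the apiserver's self-signed serving cert (sketch only).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps above
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	// Node IP and port taken from the log above.
	if err := waitForHealthz("https://192.168.61.182:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
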
	I1210 00:00:15.645243   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.145027   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:16.645018   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.145001   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:17.644893   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.145189   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:18.645579   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.144946   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.645832   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:20.145634   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:19.803715   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:00:19.814324   83859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 00:00:19.861326   83859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:00:19.873554   83859 system_pods.go:59] 8 kube-system pods found
	I1210 00:00:19.873588   83859 system_pods.go:61] "coredns-7c65d6cfc9-smnt7" [7d85cb49-3cbb-4133-acab-284ddf0b72f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:00:19.873597   83859 system_pods.go:61] "etcd-no-preload-048296" [3eeaede5-b759-49e5-8267-0aaf3c374178] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 00:00:19.873606   83859 system_pods.go:61] "kube-apiserver-no-preload-048296" [8e9d39ce-218a-4fe4-86ea-234d97a5e51d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 00:00:19.873615   83859 system_pods.go:61] "kube-controller-manager-no-preload-048296" [e28ccf7d-13e4-4b55-81d0-2dd348584bca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 00:00:19.873622   83859 system_pods.go:61] "kube-proxy-z479r" [eb6588a6-ac3c-4066-b69e-5613b03ade53] Running
	I1210 00:00:19.873634   83859 system_pods.go:61] "kube-scheduler-no-preload-048296" [0a184373-35d4-4ac6-92fb-65a4faf1c2e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 00:00:19.873646   83859 system_pods.go:61] "metrics-server-6867b74b74-sd58c" [19d64157-7b9b-4a39-8e0c-2c48729fbc93] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:00:19.873653   83859 system_pods.go:61] "storage-provisioner" [30b3ed5d-f843-4090-827f-e1ee6a7ea274] Running
	I1210 00:00:19.873661   83859 system_pods.go:74] duration metric: took 12.308897ms to wait for pod list to return data ...
	I1210 00:00:19.873668   83859 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:00:19.877459   83859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:00:19.877482   83859 node_conditions.go:123] node cpu capacity is 2
	I1210 00:00:19.877496   83859 node_conditions.go:105] duration metric: took 3.822698ms to run NodePressure ...
	I1210 00:00:19.877513   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:00:20.145838   83859 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 00:00:20.151038   83859 kubeadm.go:739] kubelet initialised
	I1210 00:00:20.151059   83859 kubeadm.go:740] duration metric: took 5.1997ms waiting for restarted kubelet to initialise ...
	I1210 00:00:20.151068   83859 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:00:20.157554   83859 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.162413   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.162437   83859 pod_ready.go:82] duration metric: took 4.858113ms for pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.162446   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "coredns-7c65d6cfc9-smnt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.162452   83859 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.167285   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "etcd-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.167312   83859 pod_ready.go:82] duration metric: took 4.84903ms for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.167321   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "etcd-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.167328   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:20.172365   83859 pod_ready.go:98] node "no-preload-048296" hosting pod "kube-apiserver-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.172386   83859 pod_ready.go:82] duration metric: took 5.052446ms for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	E1210 00:00:20.172395   83859 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-048296" hosting pod "kube-apiserver-no-preload-048296" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-048296" has status "Ready":"False"
	I1210 00:00:20.172401   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:18.257283   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.756752   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.490469   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:22.990635   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:20.645094   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.145179   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:21.645829   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.145159   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.645390   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.145663   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:23.645205   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.145625   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:24.645299   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:25.144971   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:22.180104   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:24.680524   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:26.680818   83859 pod_ready.go:103] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:22.757621   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:25.257680   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:24.991719   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:27.490099   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:25.644977   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.145967   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:26.645297   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.144972   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:27.645030   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.145227   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.645104   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.144957   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:29.645516   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:30.145343   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:28.678941   83859 pod_ready.go:93] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:28.678972   83859 pod_ready.go:82] duration metric: took 8.506560777s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.678985   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z479r" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.684896   83859 pod_ready.go:93] pod "kube-proxy-z479r" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:28.684917   83859 pod_ready.go:82] duration metric: took 5.924915ms for pod "kube-proxy-z479r" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:28.684926   83859 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:30.691918   83859 pod_ready.go:103] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:31.691333   83859 pod_ready.go:93] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:00:31.691362   83859 pod_ready.go:82] duration metric: took 3.006429416s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:31.691372   83859 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" ...
	I1210 00:00:27.756937   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:30.260931   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:29.990322   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:32.489835   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:30.645218   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.145393   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:31.645952   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.145695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:32.644910   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.146012   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.645636   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.145664   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:34.645813   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:35.145365   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:33.697948   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.197360   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:32.756044   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:34.756585   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.756683   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:34.490676   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:36.491117   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:38.991231   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:35.645823   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.145410   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:36.645665   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.144994   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:37.645701   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.144880   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.645735   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.145806   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:39.645695   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:40.145692   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:38.198422   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:40.697971   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:39.256015   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:41.256607   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:41.490276   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:43.989182   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:40.644935   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.145320   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:41.645470   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.145714   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:42.644961   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.145687   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:43.644990   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:44.144958   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:44.645780   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:44.645870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:44.680200   84547 cri.go:89] found id: ""
	I1210 00:00:44.680321   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.680334   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:44.680343   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:44.680413   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:44.715278   84547 cri.go:89] found id: ""
	I1210 00:00:44.715305   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.715312   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:44.715318   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:44.715377   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:44.749908   84547 cri.go:89] found id: ""
	I1210 00:00:44.749933   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.749941   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:44.749946   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:44.750008   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:44.786851   84547 cri.go:89] found id: ""
	I1210 00:00:44.786882   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.786893   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:44.786901   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:44.786966   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:44.830060   84547 cri.go:89] found id: ""
	I1210 00:00:44.830106   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.830117   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:44.830125   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:44.830191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:44.865525   84547 cri.go:89] found id: ""
	I1210 00:00:44.865560   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.865571   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:44.865579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:44.865643   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:44.902529   84547 cri.go:89] found id: ""
	I1210 00:00:44.902565   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.902575   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:44.902584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:44.902647   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:44.938876   84547 cri.go:89] found id: ""
	I1210 00:00:44.938904   84547 logs.go:282] 0 containers: []
	W1210 00:00:44.938914   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:44.938925   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:44.938939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:44.992533   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:44.992570   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:45.005990   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:45.006020   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:45.116774   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:45.116797   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:45.116810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:45.187376   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:45.187411   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:42.698555   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.198082   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:43.756559   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.756755   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:45.990399   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.991485   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.728964   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:47.742500   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:47.742560   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:47.775751   84547 cri.go:89] found id: ""
	I1210 00:00:47.775783   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.775793   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:47.775799   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:47.775848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:47.810202   84547 cri.go:89] found id: ""
	I1210 00:00:47.810228   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.810235   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:47.810241   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:47.810302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:47.844686   84547 cri.go:89] found id: ""
	I1210 00:00:47.844730   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.844739   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:47.844745   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:47.844802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:47.884076   84547 cri.go:89] found id: ""
	I1210 00:00:47.884108   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.884119   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:47.884127   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:47.884188   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:47.923697   84547 cri.go:89] found id: ""
	I1210 00:00:47.923722   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.923729   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:47.923734   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:47.923791   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:47.955790   84547 cri.go:89] found id: ""
	I1210 00:00:47.955816   84547 logs.go:282] 0 containers: []
	W1210 00:00:47.955824   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:47.955829   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:47.955890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:48.021501   84547 cri.go:89] found id: ""
	I1210 00:00:48.021529   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.021537   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:48.021543   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:48.021592   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:48.054645   84547 cri.go:89] found id: ""
	I1210 00:00:48.054675   84547 logs.go:282] 0 containers: []
	W1210 00:00:48.054688   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:48.054699   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:48.054714   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:48.135706   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:48.135729   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:48.135746   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:48.212397   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:48.212438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:48.254002   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:48.254033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:48.305858   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:48.305891   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:47.697503   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:49.698076   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:47.758210   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.257400   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.490507   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:52.989527   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:50.819644   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:50.834153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:50.834233   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:50.872357   84547 cri.go:89] found id: ""
	I1210 00:00:50.872391   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.872402   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:50.872409   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:50.872471   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:50.909793   84547 cri.go:89] found id: ""
	I1210 00:00:50.909826   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.909839   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:50.909848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:50.909915   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:50.947167   84547 cri.go:89] found id: ""
	I1210 00:00:50.947206   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.947219   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:50.947228   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:50.947304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:50.987194   84547 cri.go:89] found id: ""
	I1210 00:00:50.987219   84547 logs.go:282] 0 containers: []
	W1210 00:00:50.987228   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:50.987234   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:50.987287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:51.027433   84547 cri.go:89] found id: ""
	I1210 00:00:51.027464   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.027476   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:51.027483   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:51.027586   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:51.062156   84547 cri.go:89] found id: ""
	I1210 00:00:51.062186   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.062200   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:51.062208   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:51.062267   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:51.099161   84547 cri.go:89] found id: ""
	I1210 00:00:51.099187   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.099195   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:51.099202   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:51.099249   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:51.134400   84547 cri.go:89] found id: ""
	I1210 00:00:51.134432   84547 logs.go:282] 0 containers: []
	W1210 00:00:51.134445   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:51.134459   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:51.134474   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:51.171842   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:51.171869   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:51.222815   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:51.222854   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:51.236499   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:51.236537   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:51.307835   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:51.307856   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:51.307871   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:53.888771   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:53.902167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:53.902234   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:53.940724   84547 cri.go:89] found id: ""
	I1210 00:00:53.940748   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.940756   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:53.940767   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:53.940823   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:53.975076   84547 cri.go:89] found id: ""
	I1210 00:00:53.975111   84547 logs.go:282] 0 containers: []
	W1210 00:00:53.975122   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:53.975128   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:53.975191   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:54.011123   84547 cri.go:89] found id: ""
	I1210 00:00:54.011149   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.011157   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:54.011162   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:54.011207   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:54.043594   84547 cri.go:89] found id: ""
	I1210 00:00:54.043620   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.043628   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:54.043633   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:54.043679   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:54.076172   84547 cri.go:89] found id: ""
	I1210 00:00:54.076208   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.076219   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:54.076227   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:54.076292   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:54.110646   84547 cri.go:89] found id: ""
	I1210 00:00:54.110679   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.110691   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:54.110711   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:54.110784   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:54.145901   84547 cri.go:89] found id: ""
	I1210 00:00:54.145929   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.145940   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:54.145947   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:54.146007   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:54.178589   84547 cri.go:89] found id: ""
	I1210 00:00:54.178618   84547 logs.go:282] 0 containers: []
	W1210 00:00:54.178629   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:54.178639   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:54.178652   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:54.231005   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:54.231040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:00:54.244583   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:54.244608   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:54.318759   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:54.318787   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:54.318800   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:54.395975   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:54.396012   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:52.198012   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:54.199221   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:56.697929   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:52.756419   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:54.757807   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:57.257555   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:55.490621   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:57.491632   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:56.969699   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:00:56.985159   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:00:56.985232   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:00:57.020768   84547 cri.go:89] found id: ""
	I1210 00:00:57.020798   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.020807   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:00:57.020812   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:00:57.020861   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:00:57.055796   84547 cri.go:89] found id: ""
	I1210 00:00:57.055826   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.055834   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:00:57.055839   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:00:57.055897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:00:57.089345   84547 cri.go:89] found id: ""
	I1210 00:00:57.089375   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.089385   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:00:57.089392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:00:57.089460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:00:57.123969   84547 cri.go:89] found id: ""
	I1210 00:00:57.123993   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.124004   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:00:57.124012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:00:57.124075   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:00:57.158744   84547 cri.go:89] found id: ""
	I1210 00:00:57.158769   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.158777   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:00:57.158783   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:00:57.158842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:00:57.203933   84547 cri.go:89] found id: ""
	I1210 00:00:57.203954   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.203962   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:00:57.203968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:00:57.204025   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:00:57.238180   84547 cri.go:89] found id: ""
	I1210 00:00:57.238213   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.238224   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:00:57.238231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:00:57.238287   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:00:57.273583   84547 cri.go:89] found id: ""
	I1210 00:00:57.273612   84547 logs.go:282] 0 containers: []
	W1210 00:00:57.273623   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:00:57.273633   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:00:57.273648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:00:57.345992   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:00:57.346019   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:00:57.346035   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:57.428335   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:00:57.428369   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:00:57.466437   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:00:57.466472   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:00:57.517064   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:00:57.517099   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.030599   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:00.045151   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:00.045229   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:00.083050   84547 cri.go:89] found id: ""
	I1210 00:01:00.083074   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.083085   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:00.083093   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:00.083152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:00.119082   84547 cri.go:89] found id: ""
	I1210 00:01:00.119107   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.119120   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:00.119126   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:00.119185   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:00.166888   84547 cri.go:89] found id: ""
	I1210 00:01:00.166921   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.166931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:00.166939   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:00.166998   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:00.209504   84547 cri.go:89] found id: ""
	I1210 00:01:00.209526   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.209533   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:00.209539   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:00.209595   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:00.247649   84547 cri.go:89] found id: ""
	I1210 00:01:00.247672   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.247680   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:00.247686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:00.247736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:00.294419   84547 cri.go:89] found id: ""
	I1210 00:01:00.294445   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.294455   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:00.294463   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:00.294526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:00.328634   84547 cri.go:89] found id: ""
	I1210 00:01:00.328667   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.328677   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:00.328684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:00.328751   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:00.364667   84547 cri.go:89] found id: ""
	I1210 00:01:00.364696   84547 logs.go:282] 0 containers: []
	W1210 00:01:00.364724   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:00.364733   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:00.364745   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:00.377518   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:00.377552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:00.449145   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:00.449166   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:00.449178   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:00:58.698000   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.196968   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:59.756367   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.756407   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:00:59.989896   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:01.991121   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:00.529462   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:00.529499   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:00.570471   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:00.570503   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.122835   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:03.136329   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:03.136405   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:03.170352   84547 cri.go:89] found id: ""
	I1210 00:01:03.170380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.170388   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:03.170393   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:03.170454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:03.204339   84547 cri.go:89] found id: ""
	I1210 00:01:03.204368   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.204379   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:03.204386   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:03.204448   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:03.237516   84547 cri.go:89] found id: ""
	I1210 00:01:03.237559   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.237572   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:03.237579   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:03.237641   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:03.273379   84547 cri.go:89] found id: ""
	I1210 00:01:03.273407   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.273416   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:03.273421   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:03.274017   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:03.308987   84547 cri.go:89] found id: ""
	I1210 00:01:03.309026   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.309038   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:03.309046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:03.309102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:03.341913   84547 cri.go:89] found id: ""
	I1210 00:01:03.341943   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.341954   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:03.341961   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:03.342009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:03.375403   84547 cri.go:89] found id: ""
	I1210 00:01:03.375429   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.375437   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:03.375442   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:03.375494   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:03.408352   84547 cri.go:89] found id: ""
	I1210 00:01:03.408380   84547 logs.go:282] 0 containers: []
	W1210 00:01:03.408387   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:03.408397   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:03.408409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:03.483789   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:03.483831   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:03.532021   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:03.532050   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:03.585095   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:03.585129   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:03.600062   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:03.600089   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:03.671633   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:03.197410   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:05.698110   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:03.757014   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.257259   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:04.490159   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.490493   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:08.991447   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:06.172345   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:06.199809   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:06.199897   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:06.240690   84547 cri.go:89] found id: ""
	I1210 00:01:06.240749   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.240760   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:06.240769   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:06.240822   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:06.275743   84547 cri.go:89] found id: ""
	I1210 00:01:06.275770   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.275779   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:06.275786   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:06.275848   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:06.310399   84547 cri.go:89] found id: ""
	I1210 00:01:06.310427   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.310438   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:06.310445   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:06.310507   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:06.345278   84547 cri.go:89] found id: ""
	I1210 00:01:06.345308   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.345320   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:06.345328   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:06.345394   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:06.381926   84547 cri.go:89] found id: ""
	I1210 00:01:06.381961   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.381988   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:06.381994   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:06.382064   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:06.415341   84547 cri.go:89] found id: ""
	I1210 00:01:06.415367   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.415377   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:06.415385   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:06.415438   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:06.449701   84547 cri.go:89] found id: ""
	I1210 00:01:06.449733   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.449743   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:06.449750   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:06.449814   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:06.484271   84547 cri.go:89] found id: ""
	I1210 00:01:06.484298   84547 logs.go:282] 0 containers: []
	W1210 00:01:06.484307   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:06.484317   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:06.484333   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:06.570279   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:06.570313   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:06.607812   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:06.607845   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:06.658206   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:06.658243   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:06.670949   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:06.670978   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:06.745785   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:09.246367   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:09.259390   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:09.259462   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:09.291811   84547 cri.go:89] found id: ""
	I1210 00:01:09.291841   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.291853   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:09.291865   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:09.291922   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:09.326996   84547 cri.go:89] found id: ""
	I1210 00:01:09.327027   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.327038   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:09.327045   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:09.327109   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:09.361026   84547 cri.go:89] found id: ""
	I1210 00:01:09.361062   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.361073   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:09.361081   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:09.361152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:09.397116   84547 cri.go:89] found id: ""
	I1210 00:01:09.397148   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.397159   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:09.397166   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:09.397225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:09.428989   84547 cri.go:89] found id: ""
	I1210 00:01:09.429025   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.429039   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:09.429046   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:09.429111   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:09.462491   84547 cri.go:89] found id: ""
	I1210 00:01:09.462535   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.462547   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:09.462572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:09.462652   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:09.502705   84547 cri.go:89] found id: ""
	I1210 00:01:09.502728   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.502735   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:09.502740   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:09.502798   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:09.538721   84547 cri.go:89] found id: ""
	I1210 00:01:09.538744   84547 logs.go:282] 0 containers: []
	W1210 00:01:09.538754   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:09.538766   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:09.538779   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:09.590732   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:09.590768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:09.604928   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:09.604960   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:09.681457   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:09.681484   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:09.681498   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:09.758116   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:09.758149   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:07.698904   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.198215   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:08.755738   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.756172   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:10.992252   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:13.491283   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:12.307016   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:12.320284   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:12.320371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:12.351984   84547 cri.go:89] found id: ""
	I1210 00:01:12.352007   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.352016   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:12.352024   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:12.352086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:12.387797   84547 cri.go:89] found id: ""
	I1210 00:01:12.387829   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.387840   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:12.387848   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:12.387924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:12.417934   84547 cri.go:89] found id: ""
	I1210 00:01:12.417968   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.417979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:12.417987   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:12.418048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:12.451771   84547 cri.go:89] found id: ""
	I1210 00:01:12.451802   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.451815   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:12.451822   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:12.451890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:12.485007   84547 cri.go:89] found id: ""
	I1210 00:01:12.485037   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.485048   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:12.485055   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:12.485117   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:12.527851   84547 cri.go:89] found id: ""
	I1210 00:01:12.527879   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.527895   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:12.527905   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:12.527967   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:12.560341   84547 cri.go:89] found id: ""
	I1210 00:01:12.560364   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.560372   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:12.560377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:12.560428   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:12.593118   84547 cri.go:89] found id: ""
	I1210 00:01:12.593143   84547 logs.go:282] 0 containers: []
	W1210 00:01:12.593150   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:12.593158   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:12.593169   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:12.658694   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:12.658717   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:12.658749   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:12.739742   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:12.739776   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:12.780949   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:12.780977   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:12.829570   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:12.829607   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:15.344239   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:15.358424   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:15.358527   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:15.390712   84547 cri.go:89] found id: ""
	I1210 00:01:15.390744   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.390757   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:15.390764   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:15.390847   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:15.424198   84547 cri.go:89] found id: ""
	I1210 00:01:15.424226   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.424234   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:15.424239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:15.424306   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:15.458340   84547 cri.go:89] found id: ""
	I1210 00:01:15.458363   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.458370   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:15.458377   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:15.458422   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:15.491468   84547 cri.go:89] found id: ""
	I1210 00:01:15.491497   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.491507   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:15.491520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:15.491597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:12.698224   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.197841   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:12.756618   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:14.759492   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:17.256075   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.491494   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:17.989410   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:15.528405   84547 cri.go:89] found id: ""
	I1210 00:01:15.528437   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.528448   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:15.528455   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:15.528517   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:15.562966   84547 cri.go:89] found id: ""
	I1210 00:01:15.562995   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.563005   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:15.563012   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:15.563063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:15.595815   84547 cri.go:89] found id: ""
	I1210 00:01:15.595838   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.595845   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:15.595850   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:15.595907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:15.631296   84547 cri.go:89] found id: ""
	I1210 00:01:15.631322   84547 logs.go:282] 0 containers: []
	W1210 00:01:15.631333   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:15.631347   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:15.631362   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:15.680177   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:15.680213   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:15.693685   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:15.693724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:15.760285   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:15.760312   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:15.760326   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:15.837814   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:15.837855   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:18.377112   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:18.390167   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:18.390230   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:18.424095   84547 cri.go:89] found id: ""
	I1210 00:01:18.424127   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.424140   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:18.424150   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:18.424216   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:18.456840   84547 cri.go:89] found id: ""
	I1210 00:01:18.456868   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.456876   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:18.456882   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:18.456938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:18.492109   84547 cri.go:89] found id: ""
	I1210 00:01:18.492134   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.492145   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:18.492153   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:18.492212   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:18.530445   84547 cri.go:89] found id: ""
	I1210 00:01:18.530472   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.530480   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:18.530486   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:18.530549   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:18.567036   84547 cri.go:89] found id: ""
	I1210 00:01:18.567060   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.567070   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:18.567077   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:18.567136   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:18.606829   84547 cri.go:89] found id: ""
	I1210 00:01:18.606853   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.606863   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:18.606870   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:18.606927   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:18.646032   84547 cri.go:89] found id: ""
	I1210 00:01:18.646061   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.646070   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:18.646075   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:18.646127   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:18.683969   84547 cri.go:89] found id: ""
	I1210 00:01:18.683997   84547 logs.go:282] 0 containers: []
	W1210 00:01:18.684008   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:18.684019   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:18.684036   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:18.735760   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:18.735807   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:18.751689   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:18.751724   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:18.823252   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:18.823272   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:18.823287   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:18.908071   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:18.908110   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:17.698577   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:20.197901   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:19.757398   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:22.255920   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:20.490512   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:22.990168   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:21.445415   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:21.458091   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:21.458152   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:21.492615   84547 cri.go:89] found id: ""
	I1210 00:01:21.492646   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.492657   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:21.492664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:21.492718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:21.529564   84547 cri.go:89] found id: ""
	I1210 00:01:21.529586   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.529594   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:21.529599   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:21.529669   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:21.566409   84547 cri.go:89] found id: ""
	I1210 00:01:21.566441   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.566450   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:21.566456   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:21.566528   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:21.601783   84547 cri.go:89] found id: ""
	I1210 00:01:21.601815   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.601827   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:21.601835   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:21.601895   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:21.635740   84547 cri.go:89] found id: ""
	I1210 00:01:21.635762   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.635769   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:21.635775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:21.635831   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:21.667777   84547 cri.go:89] found id: ""
	I1210 00:01:21.667806   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.667826   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:21.667834   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:21.667894   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:21.704355   84547 cri.go:89] found id: ""
	I1210 00:01:21.704380   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.704388   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:21.704398   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:21.704457   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:21.737365   84547 cri.go:89] found id: ""
	I1210 00:01:21.737398   84547 logs.go:282] 0 containers: []
	W1210 00:01:21.737410   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:21.737422   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:21.737438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:21.751394   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:21.751434   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:21.823217   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:21.823240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:21.823255   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:21.910097   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:21.910131   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:21.950087   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:21.950123   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.504626   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:24.517920   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:24.517997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:24.551766   84547 cri.go:89] found id: ""
	I1210 00:01:24.551806   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.551814   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:24.551821   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:24.551880   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:24.588210   84547 cri.go:89] found id: ""
	I1210 00:01:24.588246   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.588256   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:24.588263   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:24.588341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:24.620569   84547 cri.go:89] found id: ""
	I1210 00:01:24.620598   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.620607   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:24.620613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:24.620673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:24.657527   84547 cri.go:89] found id: ""
	I1210 00:01:24.657550   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.657558   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:24.657564   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:24.657636   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:24.689382   84547 cri.go:89] found id: ""
	I1210 00:01:24.689410   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.689418   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:24.689423   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:24.689475   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:24.727170   84547 cri.go:89] found id: ""
	I1210 00:01:24.727207   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.727224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:24.727230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:24.727280   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:24.760733   84547 cri.go:89] found id: ""
	I1210 00:01:24.760759   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.760769   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:24.760775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:24.760842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:24.794750   84547 cri.go:89] found id: ""
	I1210 00:01:24.794782   84547 logs.go:282] 0 containers: []
	W1210 00:01:24.794791   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:24.794799   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:24.794810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:24.847403   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:24.847441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:24.862240   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:24.862273   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:24.936373   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:24.936396   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:24.936409   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:25.011126   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:25.011160   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:22.198527   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.697981   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.257466   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:26.258086   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:24.990792   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:26.991843   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:27.551134   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:27.564526   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:27.564665   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:27.600652   84547 cri.go:89] found id: ""
	I1210 00:01:27.600677   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.600685   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:27.600690   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:27.600753   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:27.643593   84547 cri.go:89] found id: ""
	I1210 00:01:27.643625   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.643636   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:27.643649   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:27.643705   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:27.680633   84547 cri.go:89] found id: ""
	I1210 00:01:27.680664   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.680676   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:27.680684   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:27.680748   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:27.721790   84547 cri.go:89] found id: ""
	I1210 00:01:27.721824   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.721832   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:27.721837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:27.721901   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:27.756462   84547 cri.go:89] found id: ""
	I1210 00:01:27.756492   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.756504   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:27.756512   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:27.756574   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:27.792692   84547 cri.go:89] found id: ""
	I1210 00:01:27.792728   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.792838   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:27.792851   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:27.792912   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:27.830065   84547 cri.go:89] found id: ""
	I1210 00:01:27.830098   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.830107   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:27.830116   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:27.830182   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:27.867516   84547 cri.go:89] found id: ""
	I1210 00:01:27.867557   84547 logs.go:282] 0 containers: []
	W1210 00:01:27.867580   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:27.867595   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:27.867611   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:27.917622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:27.917660   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:27.931687   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:27.931717   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:28.003827   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:28.003860   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:28.003876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:28.081375   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:28.081410   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:27.197999   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:29.198364   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:31.698061   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:28.755855   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:30.756687   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:29.489992   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:31.490850   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:33.990464   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:30.622885   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:30.635990   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:30.636065   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:30.671418   84547 cri.go:89] found id: ""
	I1210 00:01:30.671453   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.671465   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:30.671475   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:30.671548   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:30.707168   84547 cri.go:89] found id: ""
	I1210 00:01:30.707203   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.707214   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:30.707222   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:30.707275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:30.741349   84547 cri.go:89] found id: ""
	I1210 00:01:30.741377   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.741386   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:30.741395   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:30.741449   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:30.774952   84547 cri.go:89] found id: ""
	I1210 00:01:30.774985   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.774997   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:30.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:30.775062   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:30.809596   84547 cri.go:89] found id: ""
	I1210 00:01:30.809623   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.809631   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:30.809636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:30.809699   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:30.844256   84547 cri.go:89] found id: ""
	I1210 00:01:30.844288   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.844300   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:30.844308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:30.844371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:30.876086   84547 cri.go:89] found id: ""
	I1210 00:01:30.876113   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.876124   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:30.876131   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:30.876195   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:30.910858   84547 cri.go:89] found id: ""
	I1210 00:01:30.910884   84547 logs.go:282] 0 containers: []
	W1210 00:01:30.910895   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:30.910905   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:30.910920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:30.990855   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:30.990876   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:30.990888   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:31.069186   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:31.069221   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:31.109653   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:31.109689   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:31.164962   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:31.165001   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:33.679784   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:33.692222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:33.692310   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:33.727937   84547 cri.go:89] found id: ""
	I1210 00:01:33.727960   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.727967   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:33.727973   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:33.728022   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:33.762122   84547 cri.go:89] found id: ""
	I1210 00:01:33.762151   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.762162   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:33.762171   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:33.762237   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:33.793855   84547 cri.go:89] found id: ""
	I1210 00:01:33.793883   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.793894   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:33.793903   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:33.793961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:33.827240   84547 cri.go:89] found id: ""
	I1210 00:01:33.827280   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.827292   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:33.827300   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:33.827366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:33.863705   84547 cri.go:89] found id: ""
	I1210 00:01:33.863728   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.863738   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:33.863743   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:33.863792   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:33.897189   84547 cri.go:89] found id: ""
	I1210 00:01:33.897213   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.897224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:33.897229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:33.897282   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:33.930001   84547 cri.go:89] found id: ""
	I1210 00:01:33.930034   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.930044   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:33.930052   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:33.930113   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:33.967330   84547 cri.go:89] found id: ""
	I1210 00:01:33.967359   84547 logs.go:282] 0 containers: []
	W1210 00:01:33.967367   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:33.967378   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:33.967390   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:34.051952   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:34.051996   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:34.087729   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:34.087762   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:34.137879   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:34.137915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:34.151989   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:34.152025   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:34.225587   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:33.698633   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.197023   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:33.257021   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:35.258050   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.489900   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:38.490643   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:36.726065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:36.740513   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:36.740594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:36.774959   84547 cri.go:89] found id: ""
	I1210 00:01:36.774991   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.775000   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:36.775005   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:36.775070   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:36.808888   84547 cri.go:89] found id: ""
	I1210 00:01:36.808921   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.808934   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:36.808941   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:36.809001   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:36.841695   84547 cri.go:89] found id: ""
	I1210 00:01:36.841727   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.841738   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:36.841748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:36.841840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:36.877479   84547 cri.go:89] found id: ""
	I1210 00:01:36.877509   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.877522   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:36.877531   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:36.877600   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:36.909232   84547 cri.go:89] found id: ""
	I1210 00:01:36.909257   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.909265   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:36.909271   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:36.909328   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:36.945858   84547 cri.go:89] found id: ""
	I1210 00:01:36.945892   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.945904   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:36.945912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:36.945981   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:36.978559   84547 cri.go:89] found id: ""
	I1210 00:01:36.978592   84547 logs.go:282] 0 containers: []
	W1210 00:01:36.978604   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:36.978611   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:36.978674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:37.015523   84547 cri.go:89] found id: ""
	I1210 00:01:37.015555   84547 logs.go:282] 0 containers: []
	W1210 00:01:37.015587   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:37.015598   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:37.015613   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:37.094825   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:37.094876   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:37.134442   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:37.134470   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:37.184691   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:37.184728   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:37.198801   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:37.198828   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:37.268415   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:39.768790   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:39.783106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:39.783184   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:39.816629   84547 cri.go:89] found id: ""
	I1210 00:01:39.816655   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.816665   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:39.816672   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:39.816749   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:39.851518   84547 cri.go:89] found id: ""
	I1210 00:01:39.851550   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.851581   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:39.851590   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:39.851648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:39.887790   84547 cri.go:89] found id: ""
	I1210 00:01:39.887827   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.887842   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:39.887852   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:39.887924   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:39.922230   84547 cri.go:89] found id: ""
	I1210 00:01:39.922255   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.922262   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:39.922268   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:39.922332   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:39.957142   84547 cri.go:89] found id: ""
	I1210 00:01:39.957174   84547 logs.go:282] 0 containers: []
	W1210 00:01:39.957184   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:39.957192   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:39.957254   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:40.004583   84547 cri.go:89] found id: ""
	I1210 00:01:40.004609   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.004618   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:40.004624   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:40.004675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:40.038278   84547 cri.go:89] found id: ""
	I1210 00:01:40.038300   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.038308   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:40.038313   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:40.038366   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:40.071920   84547 cri.go:89] found id: ""
	I1210 00:01:40.071947   84547 logs.go:282] 0 containers: []
	W1210 00:01:40.071954   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:40.071963   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:40.071973   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:40.142005   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:40.142033   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:40.142049   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:40.219413   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:40.219452   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:40.260786   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:40.260822   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:40.315943   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:40.315988   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:38.198247   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:40.198455   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:37.756243   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:39.756567   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:41.757107   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:40.990316   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:42.990948   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:42.832592   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:42.847532   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:42.847630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:42.879313   84547 cri.go:89] found id: ""
	I1210 00:01:42.879341   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.879348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:42.879354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:42.879403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:42.914839   84547 cri.go:89] found id: ""
	I1210 00:01:42.914866   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.914877   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:42.914884   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:42.914947   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:42.949514   84547 cri.go:89] found id: ""
	I1210 00:01:42.949553   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.949564   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:42.949572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:42.949632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:42.987974   84547 cri.go:89] found id: ""
	I1210 00:01:42.988004   84547 logs.go:282] 0 containers: []
	W1210 00:01:42.988021   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:42.988029   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:42.988087   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:43.022835   84547 cri.go:89] found id: ""
	I1210 00:01:43.022860   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.022867   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:43.022873   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:43.022921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:43.055940   84547 cri.go:89] found id: ""
	I1210 00:01:43.055967   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.055975   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:43.055981   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:43.056030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:43.088728   84547 cri.go:89] found id: ""
	I1210 00:01:43.088754   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.088762   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:43.088769   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:43.088827   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:43.122809   84547 cri.go:89] found id: ""
	I1210 00:01:43.122842   84547 logs.go:282] 0 containers: []
	W1210 00:01:43.122853   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:43.122865   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:43.122881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:43.172243   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:43.172277   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:43.186566   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:43.186596   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:43.264301   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:43.264327   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:43.264341   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:43.339804   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:43.339848   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:42.698066   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.197813   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:44.256069   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:46.256746   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.490253   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:47.989687   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:45.881356   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:45.894702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:45.894779   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:45.928927   84547 cri.go:89] found id: ""
	I1210 00:01:45.928951   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.928958   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:45.928964   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:45.929009   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:45.965271   84547 cri.go:89] found id: ""
	I1210 00:01:45.965303   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.965315   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:45.965323   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:45.965392   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:45.999059   84547 cri.go:89] found id: ""
	I1210 00:01:45.999082   84547 logs.go:282] 0 containers: []
	W1210 00:01:45.999090   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:45.999095   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:45.999140   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:46.034425   84547 cri.go:89] found id: ""
	I1210 00:01:46.034456   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.034468   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:46.034476   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:46.034529   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:46.067946   84547 cri.go:89] found id: ""
	I1210 00:01:46.067970   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.067986   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:46.067993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:46.068056   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:46.101671   84547 cri.go:89] found id: ""
	I1210 00:01:46.101699   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.101710   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:46.101718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:46.101783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:46.135860   84547 cri.go:89] found id: ""
	I1210 00:01:46.135885   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.135893   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:46.135898   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:46.135948   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:46.177398   84547 cri.go:89] found id: ""
	I1210 00:01:46.177439   84547 logs.go:282] 0 containers: []
	W1210 00:01:46.177450   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:46.177461   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:46.177476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:46.248134   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:46.248160   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:46.248175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:46.323652   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:46.323688   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:46.364017   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:46.364044   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:46.429480   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:46.429524   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:48.956518   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:48.970209   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:48.970294   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:49.008933   84547 cri.go:89] found id: ""
	I1210 00:01:49.008966   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.008977   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:49.008986   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:49.009050   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:49.044802   84547 cri.go:89] found id: ""
	I1210 00:01:49.044833   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.044843   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:49.044850   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:49.044921   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:49.077484   84547 cri.go:89] found id: ""
	I1210 00:01:49.077517   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.077525   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:49.077530   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:49.077576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:49.110144   84547 cri.go:89] found id: ""
	I1210 00:01:49.110174   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.110186   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:49.110193   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:49.110255   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:49.142591   84547 cri.go:89] found id: ""
	I1210 00:01:49.142622   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.142633   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:49.142646   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:49.142709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:49.175456   84547 cri.go:89] found id: ""
	I1210 00:01:49.175486   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.175497   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:49.175505   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:49.175603   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:49.208341   84547 cri.go:89] found id: ""
	I1210 00:01:49.208368   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.208379   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:49.208387   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:49.208445   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:49.241486   84547 cri.go:89] found id: ""
	I1210 00:01:49.241509   84547 logs.go:282] 0 containers: []
	W1210 00:01:49.241518   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:49.241615   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:49.241647   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:49.280023   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:49.280051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:49.328822   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:49.328858   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:49.343076   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:49.343104   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:49.418010   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:49.418037   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:49.418051   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:47.198917   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:49.697840   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:48.757230   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:51.256927   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:49.990701   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:52.490649   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:52.004053   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:52.017350   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:52.017418   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:52.055617   84547 cri.go:89] found id: ""
	I1210 00:01:52.055641   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.055648   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:52.055654   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:52.055712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:52.088601   84547 cri.go:89] found id: ""
	I1210 00:01:52.088629   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.088637   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:52.088642   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:52.088694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:52.121880   84547 cri.go:89] found id: ""
	I1210 00:01:52.121912   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.121922   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:52.121928   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:52.121986   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:52.161281   84547 cri.go:89] found id: ""
	I1210 00:01:52.161321   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.161334   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:52.161341   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:52.161406   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:52.222768   84547 cri.go:89] found id: ""
	I1210 00:01:52.222793   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.222800   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:52.222806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:52.222862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:52.271441   84547 cri.go:89] found id: ""
	I1210 00:01:52.271465   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.271473   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:52.271479   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:52.271526   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:52.311128   84547 cri.go:89] found id: ""
	I1210 00:01:52.311152   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.311160   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:52.311165   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:52.311211   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:52.348862   84547 cri.go:89] found id: ""
	I1210 00:01:52.348885   84547 logs.go:282] 0 containers: []
	W1210 00:01:52.348892   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:52.348900   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:52.348913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:52.401280   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:52.401324   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:52.415532   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:52.415580   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:52.484956   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:52.484979   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:52.484994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:52.565102   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:52.565137   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:55.106446   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:55.121756   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:55.121846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:55.158601   84547 cri.go:89] found id: ""
	I1210 00:01:55.158632   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.158643   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:55.158650   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:55.158712   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:55.192424   84547 cri.go:89] found id: ""
	I1210 00:01:55.192454   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.192464   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:55.192471   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:55.192530   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:55.226178   84547 cri.go:89] found id: ""
	I1210 00:01:55.226204   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.226213   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:55.226222   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:55.226285   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:55.264123   84547 cri.go:89] found id: ""
	I1210 00:01:55.264148   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.264161   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:55.264169   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:55.264226   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:55.302476   84547 cri.go:89] found id: ""
	I1210 00:01:55.302503   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.302512   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:55.302520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:55.302597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:55.342308   84547 cri.go:89] found id: ""
	I1210 00:01:55.342341   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.342352   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:55.342360   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:55.342419   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:55.382203   84547 cri.go:89] found id: ""
	I1210 00:01:55.382226   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.382232   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:55.382238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:55.382286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:55.421381   84547 cri.go:89] found id: ""
	I1210 00:01:55.421409   84547 logs.go:282] 0 containers: []
	W1210 00:01:55.421421   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:55.421432   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:55.421449   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:55.473758   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:55.473793   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:55.488138   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:55.488166   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:01:52.198005   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:54.697589   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:56.697748   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:53.756310   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:55.756964   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:54.990271   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:57.490303   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	W1210 00:01:55.567216   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:55.567240   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:55.567251   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:55.648276   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:55.648319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:58.185245   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:01:58.199015   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:01:58.199091   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:01:58.232327   84547 cri.go:89] found id: ""
	I1210 00:01:58.232352   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.232360   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:01:58.232368   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:01:58.232436   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:01:58.270312   84547 cri.go:89] found id: ""
	I1210 00:01:58.270342   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.270353   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:01:58.270360   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:01:58.270420   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:01:58.303389   84547 cri.go:89] found id: ""
	I1210 00:01:58.303415   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.303422   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:01:58.303427   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:01:58.303486   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:01:58.338703   84547 cri.go:89] found id: ""
	I1210 00:01:58.338735   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.338747   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:01:58.338755   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:01:58.338817   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:01:58.376727   84547 cri.go:89] found id: ""
	I1210 00:01:58.376759   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.376770   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:01:58.376779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:01:58.376841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:01:58.410192   84547 cri.go:89] found id: ""
	I1210 00:01:58.410217   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.410224   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:01:58.410230   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:01:58.410286   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:01:58.449772   84547 cri.go:89] found id: ""
	I1210 00:01:58.449794   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.449802   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:01:58.449807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:01:58.449859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:01:58.484285   84547 cri.go:89] found id: ""
	I1210 00:01:58.484316   84547 logs.go:282] 0 containers: []
	W1210 00:01:58.484328   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:01:58.484339   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:01:58.484356   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:01:58.538402   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:01:58.538438   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:01:58.551361   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:01:58.551391   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:01:58.613809   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:01:58.613836   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:01:58.613850   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:01:58.689606   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:01:58.689640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:01:58.698283   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.197599   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:57.758229   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:00.256339   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:01:59.490413   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.491035   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:03.990725   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:01.230924   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:01.244878   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:01.244990   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:01.280475   84547 cri.go:89] found id: ""
	I1210 00:02:01.280501   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.280509   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:01.280514   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:01.280561   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:01.323042   84547 cri.go:89] found id: ""
	I1210 00:02:01.323065   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.323071   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:01.323077   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:01.323124   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:01.356146   84547 cri.go:89] found id: ""
	I1210 00:02:01.356171   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.356181   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:01.356190   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:01.356247   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:01.392542   84547 cri.go:89] found id: ""
	I1210 00:02:01.392567   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.392577   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:01.392584   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:01.392721   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:01.426232   84547 cri.go:89] found id: ""
	I1210 00:02:01.426257   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.426268   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:01.426275   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:01.426341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:01.459754   84547 cri.go:89] found id: ""
	I1210 00:02:01.459786   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.459798   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:01.459806   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:01.459865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:01.492406   84547 cri.go:89] found id: ""
	I1210 00:02:01.492435   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.492445   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:01.492450   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:01.492499   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:01.532012   84547 cri.go:89] found id: ""
	I1210 00:02:01.532034   84547 logs.go:282] 0 containers: []
	W1210 00:02:01.532042   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:01.532049   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:01.532060   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:01.583145   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:01.583181   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:01.596910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:01.596939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:01.670480   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:01.670506   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:01.670534   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:01.748001   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:01.748041   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:04.291065   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:04.304507   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:04.304587   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:04.341673   84547 cri.go:89] found id: ""
	I1210 00:02:04.341700   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.341713   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:04.341720   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:04.341772   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:04.379743   84547 cri.go:89] found id: ""
	I1210 00:02:04.379776   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.379787   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:04.379795   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:04.379856   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:04.413532   84547 cri.go:89] found id: ""
	I1210 00:02:04.413562   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.413573   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:04.413588   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:04.413648   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:04.449192   84547 cri.go:89] found id: ""
	I1210 00:02:04.449221   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.449231   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:04.449238   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:04.449324   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:04.484638   84547 cri.go:89] found id: ""
	I1210 00:02:04.484666   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.484677   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:04.484686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:04.484745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:04.524854   84547 cri.go:89] found id: ""
	I1210 00:02:04.524889   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.524903   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:04.524912   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:04.524976   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:04.564697   84547 cri.go:89] found id: ""
	I1210 00:02:04.564726   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.564737   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:04.564748   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:04.564797   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:04.603506   84547 cri.go:89] found id: ""
	I1210 00:02:04.603534   84547 logs.go:282] 0 containers: []
	W1210 00:02:04.603544   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:04.603554   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:04.603583   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:04.653025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:04.653062   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:04.666833   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:04.666878   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:04.745491   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:04.745513   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:04.745526   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:04.825267   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:04.825304   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:03.702878   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:06.197834   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:02.755541   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:04.757334   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:07.256160   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:06.490491   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:08.491144   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:07.365419   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:07.378968   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:07.379030   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:07.412830   84547 cri.go:89] found id: ""
	I1210 00:02:07.412859   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.412868   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:07.412873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:07.412938   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:07.445622   84547 cri.go:89] found id: ""
	I1210 00:02:07.445661   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.445674   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:07.445682   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:07.445742   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:07.480431   84547 cri.go:89] found id: ""
	I1210 00:02:07.480466   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.480474   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:07.480480   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:07.480533   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:07.518741   84547 cri.go:89] found id: ""
	I1210 00:02:07.518776   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.518790   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:07.518797   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:07.518860   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:07.552193   84547 cri.go:89] found id: ""
	I1210 00:02:07.552216   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.552223   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:07.552229   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:07.552275   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:07.585762   84547 cri.go:89] found id: ""
	I1210 00:02:07.585784   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.585792   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:07.585798   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:07.585843   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:07.618619   84547 cri.go:89] found id: ""
	I1210 00:02:07.618645   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.618653   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:07.618659   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:07.618709   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:07.653377   84547 cri.go:89] found id: ""
	I1210 00:02:07.653418   84547 logs.go:282] 0 containers: []
	W1210 00:02:07.653428   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:07.653440   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:07.653456   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:07.709366   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:07.709401   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:07.723762   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:07.723792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:07.804849   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:07.804869   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:07.804886   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:07.887117   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:07.887154   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:10.423120   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:10.436563   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:10.436628   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:10.470622   84547 cri.go:89] found id: ""
	I1210 00:02:10.470650   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.470658   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:10.470664   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:10.470735   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:10.506211   84547 cri.go:89] found id: ""
	I1210 00:02:10.506238   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.506250   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:10.506257   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:10.506368   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:08.198217   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.699364   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:09.256492   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:11.256897   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.990662   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:13.491635   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:10.541846   84547 cri.go:89] found id: ""
	I1210 00:02:10.541871   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.541879   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:10.541885   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:10.541952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:10.581391   84547 cri.go:89] found id: ""
	I1210 00:02:10.581416   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.581427   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:10.581435   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:10.581503   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:10.615172   84547 cri.go:89] found id: ""
	I1210 00:02:10.615206   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.615216   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:10.615223   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:10.615289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:10.650791   84547 cri.go:89] found id: ""
	I1210 00:02:10.650813   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.650821   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:10.650826   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:10.650876   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:10.685428   84547 cri.go:89] found id: ""
	I1210 00:02:10.685452   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.685460   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:10.685466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:10.685524   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:10.719139   84547 cri.go:89] found id: ""
	I1210 00:02:10.719174   84547 logs.go:282] 0 containers: []
	W1210 00:02:10.719186   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:10.719196   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:10.719211   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:10.732045   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:10.732073   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:10.805084   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:10.805111   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:10.805127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:10.888301   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:10.888337   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:10.926005   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:10.926033   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:13.479317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:13.494021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:13.494089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:13.527753   84547 cri.go:89] found id: ""
	I1210 00:02:13.527787   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.527799   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:13.527806   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:13.527862   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:13.560574   84547 cri.go:89] found id: ""
	I1210 00:02:13.560607   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.560618   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:13.560625   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:13.560688   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:13.595507   84547 cri.go:89] found id: ""
	I1210 00:02:13.595551   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.595584   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:13.595592   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:13.595657   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:13.629848   84547 cri.go:89] found id: ""
	I1210 00:02:13.629873   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.629884   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:13.629892   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:13.629952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:13.662407   84547 cri.go:89] found id: ""
	I1210 00:02:13.662436   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.662447   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:13.662454   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:13.662509   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:13.694892   84547 cri.go:89] found id: ""
	I1210 00:02:13.694921   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.694940   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:13.694949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:13.695013   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:13.733248   84547 cri.go:89] found id: ""
	I1210 00:02:13.733327   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.733349   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:13.733358   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:13.733426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:13.771853   84547 cri.go:89] found id: ""
	I1210 00:02:13.771884   84547 logs.go:282] 0 containers: []
	W1210 00:02:13.771894   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:13.771906   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:13.771920   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:13.846886   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:13.846913   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:13.846928   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:13.929722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:13.929758   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:13.968401   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:13.968427   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:14.019770   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:14.019811   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:13.197726   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.198437   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:13.257271   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.755750   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:15.990729   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:18.490851   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:16.532794   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:16.547084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:16.547172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:16.584046   84547 cri.go:89] found id: ""
	I1210 00:02:16.584073   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.584084   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:16.584091   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:16.584150   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:16.615981   84547 cri.go:89] found id: ""
	I1210 00:02:16.616012   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.616023   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:16.616030   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:16.616094   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:16.647939   84547 cri.go:89] found id: ""
	I1210 00:02:16.647967   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.647979   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:16.647986   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:16.648048   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:16.682585   84547 cri.go:89] found id: ""
	I1210 00:02:16.682620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.682632   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:16.682640   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:16.682695   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:16.718593   84547 cri.go:89] found id: ""
	I1210 00:02:16.718620   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.718628   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:16.718634   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:16.718687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:16.752513   84547 cri.go:89] found id: ""
	I1210 00:02:16.752536   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.752543   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:16.752549   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:16.752598   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:16.787679   84547 cri.go:89] found id: ""
	I1210 00:02:16.787702   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.787710   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:16.787715   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:16.787777   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:16.821275   84547 cri.go:89] found id: ""
	I1210 00:02:16.821297   84547 logs.go:282] 0 containers: []
	W1210 00:02:16.821305   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:16.821312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:16.821322   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:16.872500   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:16.872533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:16.885185   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:16.885212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:16.962658   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:16.962679   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:16.962694   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:17.039689   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:17.039726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:19.578060   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:19.590601   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:19.590675   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:19.622145   84547 cri.go:89] found id: ""
	I1210 00:02:19.622170   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.622179   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:19.622184   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:19.622231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:19.653204   84547 cri.go:89] found id: ""
	I1210 00:02:19.653232   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.653243   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:19.653250   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:19.653317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:19.685104   84547 cri.go:89] found id: ""
	I1210 00:02:19.685137   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.685148   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:19.685156   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:19.685213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:19.719087   84547 cri.go:89] found id: ""
	I1210 00:02:19.719113   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.719121   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:19.719126   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:19.719176   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:19.753210   84547 cri.go:89] found id: ""
	I1210 00:02:19.753239   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.753250   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:19.753258   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:19.753317   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:19.787608   84547 cri.go:89] found id: ""
	I1210 00:02:19.787635   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.787645   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:19.787653   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:19.787718   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:19.823111   84547 cri.go:89] found id: ""
	I1210 00:02:19.823142   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.823154   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:19.823161   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:19.823221   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:19.858274   84547 cri.go:89] found id: ""
	I1210 00:02:19.858301   84547 logs.go:282] 0 containers: []
	W1210 00:02:19.858312   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:19.858323   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:19.858344   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:19.905386   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:19.905420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:19.918995   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:19.919026   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:19.990676   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:19.990700   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:19.990716   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:20.064396   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:20.064435   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:17.698109   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:20.197963   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:17.756633   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:19.757487   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:22.257036   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:20.990682   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:23.490383   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:22.604477   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:22.617408   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:22.617487   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:22.650144   84547 cri.go:89] found id: ""
	I1210 00:02:22.650178   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.650189   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:22.650197   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:22.650293   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:22.684342   84547 cri.go:89] found id: ""
	I1210 00:02:22.684367   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.684375   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:22.684380   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:22.684429   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:22.718168   84547 cri.go:89] found id: ""
	I1210 00:02:22.718194   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.718204   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:22.718211   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:22.718271   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:22.755192   84547 cri.go:89] found id: ""
	I1210 00:02:22.755222   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.755232   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:22.755240   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:22.755297   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:22.793095   84547 cri.go:89] found id: ""
	I1210 00:02:22.793129   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.793141   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:22.793149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:22.793209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:22.831116   84547 cri.go:89] found id: ""
	I1210 00:02:22.831146   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.831157   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:22.831164   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:22.831235   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:22.869652   84547 cri.go:89] found id: ""
	I1210 00:02:22.869686   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.869704   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:22.869709   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:22.869756   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:22.907480   84547 cri.go:89] found id: ""
	I1210 00:02:22.907504   84547 logs.go:282] 0 containers: []
	W1210 00:02:22.907513   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:22.907520   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:22.907533   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:22.983880   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:22.983902   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:22.983915   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:23.062840   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:23.062880   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:23.101427   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:23.101476   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:23.153861   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:23.153893   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:22.198638   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:24.698708   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:24.757526   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:27.256296   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:25.491835   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:27.990755   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:25.666908   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:25.680047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:25.680125   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:25.715821   84547 cri.go:89] found id: ""
	I1210 00:02:25.715853   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.715865   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:25.715873   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:25.715931   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:25.748191   84547 cri.go:89] found id: ""
	I1210 00:02:25.748222   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.748232   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:25.748239   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:25.748295   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:25.786474   84547 cri.go:89] found id: ""
	I1210 00:02:25.786498   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.786505   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:25.786511   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:25.786569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:25.819282   84547 cri.go:89] found id: ""
	I1210 00:02:25.819319   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.819330   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:25.819337   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:25.819400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:25.858064   84547 cri.go:89] found id: ""
	I1210 00:02:25.858091   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.858100   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:25.858106   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:25.858169   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:25.893335   84547 cri.go:89] found id: ""
	I1210 00:02:25.893362   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.893373   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:25.893380   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:25.893439   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:25.927225   84547 cri.go:89] found id: ""
	I1210 00:02:25.927254   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.927265   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:25.927272   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:25.927341   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:25.963678   84547 cri.go:89] found id: ""
	I1210 00:02:25.963715   84547 logs.go:282] 0 containers: []
	W1210 00:02:25.963725   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:25.963738   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:25.963756   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:25.994462   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:25.994488   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:26.061394   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:26.061442   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:26.061458   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:26.135152   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:26.135187   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:26.171961   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:26.171994   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:28.723326   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:28.735721   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:28.735787   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:28.769493   84547 cri.go:89] found id: ""
	I1210 00:02:28.769516   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.769525   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:28.769530   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:28.769582   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:28.805594   84547 cri.go:89] found id: ""
	I1210 00:02:28.805641   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.805652   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:28.805658   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:28.805704   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:28.838982   84547 cri.go:89] found id: ""
	I1210 00:02:28.839007   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.839015   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:28.839020   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:28.839072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:28.876607   84547 cri.go:89] found id: ""
	I1210 00:02:28.876627   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.876635   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:28.876641   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:28.876700   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:28.910352   84547 cri.go:89] found id: ""
	I1210 00:02:28.910379   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.910386   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:28.910392   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:28.910464   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:28.943896   84547 cri.go:89] found id: ""
	I1210 00:02:28.943917   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.943924   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:28.943930   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:28.943978   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:28.976554   84547 cri.go:89] found id: ""
	I1210 00:02:28.976583   84547 logs.go:282] 0 containers: []
	W1210 00:02:28.976590   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:28.976596   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:28.976644   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:29.011334   84547 cri.go:89] found id: ""
	I1210 00:02:29.011357   84547 logs.go:282] 0 containers: []
	W1210 00:02:29.011364   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:29.011372   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:29.011384   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:29.061418   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:29.061464   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:29.074887   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:29.074913   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:29.147240   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:29.147261   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:29.147272   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:29.225058   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:29.225094   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:27.197730   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:29.197818   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.198788   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:29.256802   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.258194   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:30.490822   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:32.990271   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:31.763432   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:31.777257   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:31.777359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:31.812558   84547 cri.go:89] found id: ""
	I1210 00:02:31.812588   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.812598   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:31.812606   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:31.812668   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:31.847042   84547 cri.go:89] found id: ""
	I1210 00:02:31.847065   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.847082   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:31.847088   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:31.847135   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:31.881182   84547 cri.go:89] found id: ""
	I1210 00:02:31.881208   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.881216   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:31.881221   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:31.881272   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:31.917367   84547 cri.go:89] found id: ""
	I1210 00:02:31.917393   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.917401   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:31.917407   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:31.917454   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:31.956836   84547 cri.go:89] found id: ""
	I1210 00:02:31.956868   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.956883   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:31.956893   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:31.956952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:31.993125   84547 cri.go:89] found id: ""
	I1210 00:02:31.993151   84547 logs.go:282] 0 containers: []
	W1210 00:02:31.993160   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:31.993168   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:31.993225   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:32.031648   84547 cri.go:89] found id: ""
	I1210 00:02:32.031679   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.031687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:32.031692   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:32.031746   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:32.065894   84547 cri.go:89] found id: ""
	I1210 00:02:32.065923   84547 logs.go:282] 0 containers: []
	W1210 00:02:32.065930   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:32.065941   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:32.065957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:32.133473   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:32.133496   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:32.133508   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:32.213129   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:32.213161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:32.251424   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:32.251453   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:32.302284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:32.302323   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:34.815963   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:34.829460   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:34.829543   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:34.865811   84547 cri.go:89] found id: ""
	I1210 00:02:34.865837   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.865847   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:34.865854   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:34.865916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:34.899185   84547 cri.go:89] found id: ""
	I1210 00:02:34.899211   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.899220   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:34.899227   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:34.899289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:34.932472   84547 cri.go:89] found id: ""
	I1210 00:02:34.932500   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.932509   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:34.932517   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:34.932581   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:34.965817   84547 cri.go:89] found id: ""
	I1210 00:02:34.965846   84547 logs.go:282] 0 containers: []
	W1210 00:02:34.965857   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:34.965866   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:34.965930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:35.000036   84547 cri.go:89] found id: ""
	I1210 00:02:35.000066   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.000077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:35.000084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:35.000139   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:35.033808   84547 cri.go:89] found id: ""
	I1210 00:02:35.033839   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.033850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:35.033857   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:35.033916   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:35.066242   84547 cri.go:89] found id: ""
	I1210 00:02:35.066269   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.066278   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:35.066285   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:35.066349   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:35.100740   84547 cri.go:89] found id: ""
	I1210 00:02:35.100763   84547 logs.go:282] 0 containers: []
	W1210 00:02:35.100771   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:35.100779   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:35.100792   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:35.155483   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:35.155520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:35.168910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:35.168939   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:35.243234   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:35.243252   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:35.243263   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:35.320622   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:35.320657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:33.698543   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:36.198250   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:33.756108   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:35.757682   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:34.990969   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:37.495166   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:37.855684   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:37.869056   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:37.869156   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:37.903720   84547 cri.go:89] found id: ""
	I1210 00:02:37.903748   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.903759   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:37.903766   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:37.903859   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:37.937741   84547 cri.go:89] found id: ""
	I1210 00:02:37.937780   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.937791   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:37.937808   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:37.937869   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:37.975255   84547 cri.go:89] found id: ""
	I1210 00:02:37.975281   84547 logs.go:282] 0 containers: []
	W1210 00:02:37.975292   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:37.975299   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:37.975359   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:38.017348   84547 cri.go:89] found id: ""
	I1210 00:02:38.017381   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.017393   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:38.017400   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:38.017460   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:38.051039   84547 cri.go:89] found id: ""
	I1210 00:02:38.051065   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.051073   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:38.051079   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:38.051129   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:38.085752   84547 cri.go:89] found id: ""
	I1210 00:02:38.085782   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.085791   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:38.085799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:38.085858   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:38.119975   84547 cri.go:89] found id: ""
	I1210 00:02:38.120004   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.120014   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:38.120021   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:38.120086   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:38.161467   84547 cri.go:89] found id: ""
	I1210 00:02:38.161499   84547 logs.go:282] 0 containers: []
	W1210 00:02:38.161526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:38.161537   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:38.161551   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:38.222277   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:38.222314   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:38.239300   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:38.239332   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:38.308997   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:38.309016   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:38.309032   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:38.394064   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:38.394108   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:38.198484   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.697916   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:38.257723   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.756530   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:39.990445   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:41.993952   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:40.933406   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:40.945862   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:40.945937   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:40.979503   84547 cri.go:89] found id: ""
	I1210 00:02:40.979532   84547 logs.go:282] 0 containers: []
	W1210 00:02:40.979540   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:40.979545   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:40.979619   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:41.016758   84547 cri.go:89] found id: ""
	I1210 00:02:41.016792   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.016803   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:41.016811   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:41.016873   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:41.053562   84547 cri.go:89] found id: ""
	I1210 00:02:41.053593   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.053601   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:41.053607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:41.053667   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:41.086713   84547 cri.go:89] found id: ""
	I1210 00:02:41.086745   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.086757   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:41.086767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:41.086830   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:41.122901   84547 cri.go:89] found id: ""
	I1210 00:02:41.122935   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.122945   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:41.122952   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:41.123011   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:41.156306   84547 cri.go:89] found id: ""
	I1210 00:02:41.156337   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.156355   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:41.156362   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:41.156423   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:41.189840   84547 cri.go:89] found id: ""
	I1210 00:02:41.189871   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.189882   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:41.189890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:41.189946   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:41.223019   84547 cri.go:89] found id: ""
	I1210 00:02:41.223051   84547 logs.go:282] 0 containers: []
	W1210 00:02:41.223061   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:41.223072   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:41.223088   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:41.275608   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:41.275640   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:41.289181   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:41.289210   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:41.358375   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:41.358404   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:41.358420   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:41.440214   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:41.440250   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:43.980600   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:43.993110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:43.993165   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:44.026688   84547 cri.go:89] found id: ""
	I1210 00:02:44.026721   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.026732   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:44.026741   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:44.026796   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:44.062914   84547 cri.go:89] found id: ""
	I1210 00:02:44.062936   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.062943   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:44.062948   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:44.062999   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:44.105974   84547 cri.go:89] found id: ""
	I1210 00:02:44.106001   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.106009   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:44.106014   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:44.106061   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:44.140239   84547 cri.go:89] found id: ""
	I1210 00:02:44.140265   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.140274   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:44.140280   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:44.140338   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:44.175754   84547 cri.go:89] found id: ""
	I1210 00:02:44.175785   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.175796   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:44.175803   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:44.175870   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:44.211663   84547 cri.go:89] found id: ""
	I1210 00:02:44.211694   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.211705   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:44.211712   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:44.211776   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:44.244796   84547 cri.go:89] found id: ""
	I1210 00:02:44.244821   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.244831   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:44.244837   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:44.244898   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:44.282491   84547 cri.go:89] found id: ""
	I1210 00:02:44.282515   84547 logs.go:282] 0 containers: []
	W1210 00:02:44.282528   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:44.282549   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:44.282562   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:44.335284   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:44.335328   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:44.349489   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:44.349530   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:44.418643   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:44.418668   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:44.418682   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:44.493901   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:44.493932   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:43.197494   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:45.198341   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:43.256947   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:45.257225   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:47.258073   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:44.491413   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:46.990521   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:48.991847   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:47.033132   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:47.046322   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:47.046403   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:47.078490   84547 cri.go:89] found id: ""
	I1210 00:02:47.078521   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.078533   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:47.078541   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:47.078602   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:47.111457   84547 cri.go:89] found id: ""
	I1210 00:02:47.111479   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.111487   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:47.111492   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:47.111538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:47.144643   84547 cri.go:89] found id: ""
	I1210 00:02:47.144678   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.144689   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:47.144696   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:47.144757   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:47.178106   84547 cri.go:89] found id: ""
	I1210 00:02:47.178131   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.178141   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:47.178148   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:47.178213   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:47.215670   84547 cri.go:89] found id: ""
	I1210 00:02:47.215697   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.215712   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:47.215718   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:47.215767   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:47.248916   84547 cri.go:89] found id: ""
	I1210 00:02:47.248941   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.248948   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:47.248953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:47.249002   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:47.287632   84547 cri.go:89] found id: ""
	I1210 00:02:47.287660   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.287671   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:47.287680   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:47.287745   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:47.327064   84547 cri.go:89] found id: ""
	I1210 00:02:47.327094   84547 logs.go:282] 0 containers: []
	W1210 00:02:47.327103   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:47.327112   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:47.327126   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:47.341132   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:47.341176   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:47.417100   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:47.417121   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:47.417134   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:47.502612   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:47.502648   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:47.541312   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:47.541339   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.095403   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:50.108145   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:50.108202   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:50.140424   84547 cri.go:89] found id: ""
	I1210 00:02:50.140451   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.140462   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:50.140472   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:50.140532   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:50.173836   84547 cri.go:89] found id: ""
	I1210 00:02:50.173859   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.173866   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:50.173872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:50.173928   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:50.213916   84547 cri.go:89] found id: ""
	I1210 00:02:50.213937   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.213944   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:50.213949   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:50.213997   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:50.246854   84547 cri.go:89] found id: ""
	I1210 00:02:50.246889   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.246899   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:50.246907   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:50.246956   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:50.281416   84547 cri.go:89] found id: ""
	I1210 00:02:50.281448   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.281456   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:50.281462   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:50.281511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:50.313263   84547 cri.go:89] found id: ""
	I1210 00:02:50.313296   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.313308   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:50.313318   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:50.313385   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:50.347421   84547 cri.go:89] found id: ""
	I1210 00:02:50.347453   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.347463   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:50.347470   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:50.347544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:50.383111   84547 cri.go:89] found id: ""
	I1210 00:02:50.383134   84547 logs.go:282] 0 containers: []
	W1210 00:02:50.383142   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:50.383151   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:50.383162   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:50.421982   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:50.422013   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:50.475478   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:50.475520   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:50.489202   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:50.489256   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:02:47.199043   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:49.698055   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.698813   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:49.756585   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.757407   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:51.489831   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:53.490572   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	W1210 00:02:50.559501   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:50.559539   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:50.559552   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.136042   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:53.149149   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:53.149227   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:53.189323   84547 cri.go:89] found id: ""
	I1210 00:02:53.189349   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.189357   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:53.189365   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:53.189425   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:53.225235   84547 cri.go:89] found id: ""
	I1210 00:02:53.225269   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.225281   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:53.225288   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:53.225347   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:53.263465   84547 cri.go:89] found id: ""
	I1210 00:02:53.263492   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.263502   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:53.263510   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:53.263597   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:53.297544   84547 cri.go:89] found id: ""
	I1210 00:02:53.297571   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.297583   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:53.297591   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:53.297656   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:53.331731   84547 cri.go:89] found id: ""
	I1210 00:02:53.331755   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.331762   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:53.331767   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:53.331815   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:53.367395   84547 cri.go:89] found id: ""
	I1210 00:02:53.367427   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.367440   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:53.367447   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:53.367511   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:53.403297   84547 cri.go:89] found id: ""
	I1210 00:02:53.403324   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.403332   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:53.403338   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:53.403398   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:53.437126   84547 cri.go:89] found id: ""
	I1210 00:02:53.437150   84547 logs.go:282] 0 containers: []
	W1210 00:02:53.437158   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:53.437166   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:53.437177   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:53.489875   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:53.489914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:53.503915   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:53.503940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:53.578086   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:53.578114   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:53.578127   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:53.658463   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:53.658501   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
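	Each retry first asks the CRI runtime for the expected control-plane containers, and `found id: ""` for kube-apiserver, etcd, kube-scheduler and kube-controller-manager means none of them were ever created; that is why every `kubectl describe nodes` attempt ends with the connection to localhost:8443 being refused. A hedged sketch for confirming this by hand from inside the node; the crictl probes mirror the log, while the curl check (and the assumption that curl is present in the node image) is an illustrative addition, not taken from the log:

	    # probe for control-plane containers the same way the log does
	    sudo crictl ps -a --name kube-apiserver
	    sudo crictl ps -a --name etcd
	    # illustrative extra check: is anything listening on the apiserver port?
	    curl -ks https://localhost:8443/readyz || echo "apiserver not reachable"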
	I1210 00:02:53.699332   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.197941   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:54.257053   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.258352   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:55.990548   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:58.489947   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:56.197093   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:56.211959   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:56.212020   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:56.250462   84547 cri.go:89] found id: ""
	I1210 00:02:56.250484   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.250493   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:56.250498   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:56.250552   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:56.286828   84547 cri.go:89] found id: ""
	I1210 00:02:56.286854   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.286865   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:56.286872   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:56.286939   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:56.320750   84547 cri.go:89] found id: ""
	I1210 00:02:56.320779   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.320787   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:56.320793   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:56.320840   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:56.355903   84547 cri.go:89] found id: ""
	I1210 00:02:56.355943   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.355954   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:56.355960   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:56.356026   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:56.395979   84547 cri.go:89] found id: ""
	I1210 00:02:56.396007   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.396018   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:56.396025   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:56.396081   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:56.431481   84547 cri.go:89] found id: ""
	I1210 00:02:56.431506   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.431514   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:56.431520   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:56.431594   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:56.470318   84547 cri.go:89] found id: ""
	I1210 00:02:56.470349   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.470356   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:56.470361   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:56.470410   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:56.506388   84547 cri.go:89] found id: ""
	I1210 00:02:56.506411   84547 logs.go:282] 0 containers: []
	W1210 00:02:56.506418   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:56.506429   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:56.506441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:56.558438   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:56.558483   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:56.571981   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:56.572004   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:56.634665   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:56.634697   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:56.634713   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:56.715663   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:56.715697   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:59.255305   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:02:59.268846   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:02:59.268930   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:02:59.302334   84547 cri.go:89] found id: ""
	I1210 00:02:59.302359   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.302366   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:02:59.302372   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:02:59.302426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:02:59.336380   84547 cri.go:89] found id: ""
	I1210 00:02:59.336409   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.336419   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:02:59.336425   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:02:59.336492   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:02:59.370179   84547 cri.go:89] found id: ""
	I1210 00:02:59.370201   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.370210   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:02:59.370214   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:02:59.370268   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:02:59.403199   84547 cri.go:89] found id: ""
	I1210 00:02:59.403222   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.403229   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:02:59.403236   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:02:59.403307   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:02:59.438650   84547 cri.go:89] found id: ""
	I1210 00:02:59.438673   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.438681   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:02:59.438686   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:02:59.438736   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:02:59.473166   84547 cri.go:89] found id: ""
	I1210 00:02:59.473191   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.473199   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:02:59.473205   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:02:59.473264   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:02:59.506857   84547 cri.go:89] found id: ""
	I1210 00:02:59.506879   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.506888   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:02:59.506902   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:02:59.506963   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:02:59.551488   84547 cri.go:89] found id: ""
	I1210 00:02:59.551515   84547 logs.go:282] 0 containers: []
	W1210 00:02:59.551526   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:02:59.551542   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:02:59.551557   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:02:59.605032   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:02:59.605069   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:02:59.619238   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:02:59.619271   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:02:59.690772   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:02:59.690798   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:02:59.690813   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:02:59.774424   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:02:59.774460   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:02:58.198128   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:00.698106   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:02:58.756596   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:01.257970   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:00.490268   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:02.990951   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:02.315240   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:02.329636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:02.329728   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:02.368571   84547 cri.go:89] found id: ""
	I1210 00:03:02.368599   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.368609   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:02.368621   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:02.368687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:02.406107   84547 cri.go:89] found id: ""
	I1210 00:03:02.406136   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.406148   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:02.406155   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:02.406219   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:02.443050   84547 cri.go:89] found id: ""
	I1210 00:03:02.443077   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.443091   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:02.443098   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:02.443146   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:02.484425   84547 cri.go:89] found id: ""
	I1210 00:03:02.484451   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.484461   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:02.484469   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:02.484536   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:02.525624   84547 cri.go:89] found id: ""
	I1210 00:03:02.525647   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.525655   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:02.525661   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:02.525711   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:02.564808   84547 cri.go:89] found id: ""
	I1210 00:03:02.564839   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.564850   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:02.564856   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:02.564907   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:02.598322   84547 cri.go:89] found id: ""
	I1210 00:03:02.598346   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.598354   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:02.598359   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:02.598417   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:02.632256   84547 cri.go:89] found id: ""
	I1210 00:03:02.632310   84547 logs.go:282] 0 containers: []
	W1210 00:03:02.632322   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:02.632334   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:02.632348   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:02.686025   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:02.686063   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:02.701214   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:02.701237   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:02.773453   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:02.773477   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:02.773491   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:02.862017   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:02.862068   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:05.401014   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:05.415009   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:05.415072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:05.454916   84547 cri.go:89] found id: ""
	I1210 00:03:05.454945   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.454955   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:05.454962   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:05.455023   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:05.492955   84547 cri.go:89] found id: ""
	I1210 00:03:05.492981   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.492988   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:05.492995   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:05.493046   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:02.698652   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.196840   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:03.755858   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.757395   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.489875   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:07.990053   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:05.539646   84547 cri.go:89] found id: ""
	I1210 00:03:05.539670   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.539683   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:05.539690   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:05.539755   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:05.574534   84547 cri.go:89] found id: ""
	I1210 00:03:05.574559   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.574567   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:05.574572   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:05.574632   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:05.608141   84547 cri.go:89] found id: ""
	I1210 00:03:05.608166   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.608174   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:05.608180   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:05.608243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:05.643725   84547 cri.go:89] found id: ""
	I1210 00:03:05.643751   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.643759   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:05.643765   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:05.643812   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:05.677730   84547 cri.go:89] found id: ""
	I1210 00:03:05.677760   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.677772   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:05.677779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:05.677846   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:05.712475   84547 cri.go:89] found id: ""
	I1210 00:03:05.712507   84547 logs.go:282] 0 containers: []
	W1210 00:03:05.712519   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:05.712531   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:05.712548   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:05.764532   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:05.764569   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:05.777910   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:05.777938   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:05.851070   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:05.851099   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:05.851114   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:05.929518   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:05.929554   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.465166   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:08.478666   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:08.478723   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:08.513763   84547 cri.go:89] found id: ""
	I1210 00:03:08.513795   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.513808   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:08.513816   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:08.513877   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:08.548272   84547 cri.go:89] found id: ""
	I1210 00:03:08.548299   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.548306   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:08.548313   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:08.548397   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:08.581773   84547 cri.go:89] found id: ""
	I1210 00:03:08.581801   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.581812   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:08.581820   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:08.581884   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:08.613625   84547 cri.go:89] found id: ""
	I1210 00:03:08.613655   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.613664   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:08.613669   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:08.613716   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:08.644618   84547 cri.go:89] found id: ""
	I1210 00:03:08.644644   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.644652   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:08.644658   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:08.644717   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:08.676840   84547 cri.go:89] found id: ""
	I1210 00:03:08.676867   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.676879   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:08.676887   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:08.676952   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:08.709918   84547 cri.go:89] found id: ""
	I1210 00:03:08.709952   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.709961   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:08.709969   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:08.710029   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:08.740842   84547 cri.go:89] found id: ""
	I1210 00:03:08.740871   84547 logs.go:282] 0 containers: []
	W1210 00:03:08.740882   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:08.740893   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:08.740907   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:08.808679   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:08.808705   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:08.808721   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:08.884692   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:08.884733   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:08.920922   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:08.920952   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:08.971404   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:08.971441   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:07.197548   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:09.197749   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:11.198894   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:08.258477   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:10.755482   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:10.490495   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:12.990709   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:11.484905   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:11.498791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:11.498865   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:11.535784   84547 cri.go:89] found id: ""
	I1210 00:03:11.535809   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.535820   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:11.535827   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:11.535890   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:11.567981   84547 cri.go:89] found id: ""
	I1210 00:03:11.568006   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.568019   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:11.568026   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:11.568083   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:11.601188   84547 cri.go:89] found id: ""
	I1210 00:03:11.601213   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.601224   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:11.601231   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:11.601289   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:11.635759   84547 cri.go:89] found id: ""
	I1210 00:03:11.635787   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.635798   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:11.635807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:11.635867   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:11.675512   84547 cri.go:89] found id: ""
	I1210 00:03:11.675537   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.675547   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:11.675554   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:11.675638   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:11.709889   84547 cri.go:89] found id: ""
	I1210 00:03:11.709912   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.709922   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:11.709929   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:11.709987   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:11.744571   84547 cri.go:89] found id: ""
	I1210 00:03:11.744603   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.744610   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:11.744616   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:11.744677   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:11.779091   84547 cri.go:89] found id: ""
	I1210 00:03:11.779123   84547 logs.go:282] 0 containers: []
	W1210 00:03:11.779136   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:11.779146   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:11.779156   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:11.830682   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:11.830726   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:11.844352   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:11.844383   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:11.915025   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:11.915046   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:11.915058   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:11.998581   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:11.998620   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:14.536408   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:14.550250   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:14.550321   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:14.586005   84547 cri.go:89] found id: ""
	I1210 00:03:14.586037   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.586049   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:14.586057   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:14.586118   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:14.620124   84547 cri.go:89] found id: ""
	I1210 00:03:14.620159   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.620170   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:14.620178   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:14.620231   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:14.655079   84547 cri.go:89] found id: ""
	I1210 00:03:14.655104   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.655112   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:14.655117   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:14.655178   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:14.688253   84547 cri.go:89] found id: ""
	I1210 00:03:14.688285   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.688298   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:14.688308   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:14.688371   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:14.726904   84547 cri.go:89] found id: ""
	I1210 00:03:14.726932   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.726940   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:14.726945   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:14.726994   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:14.764836   84547 cri.go:89] found id: ""
	I1210 00:03:14.764868   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.764881   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:14.764890   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:14.764955   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:14.803557   84547 cri.go:89] found id: ""
	I1210 00:03:14.803605   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.803616   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:14.803621   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:14.803674   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:14.841070   84547 cri.go:89] found id: ""
	I1210 00:03:14.841102   84547 logs.go:282] 0 containers: []
	W1210 00:03:14.841122   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:14.841137   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:14.841161   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:14.907607   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:14.907631   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:14.907644   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:14.985179   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:14.985209   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:15.022654   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:15.022687   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:15.075224   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:15.075260   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:13.697072   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:15.697892   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:12.757232   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:14.758529   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.256541   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:15.490871   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.990140   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:17.589836   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:17.604045   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:17.604102   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:17.643211   84547 cri.go:89] found id: ""
	I1210 00:03:17.643241   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.643251   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:17.643260   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:17.643320   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:17.675436   84547 cri.go:89] found id: ""
	I1210 00:03:17.675462   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.675472   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:17.675479   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:17.675538   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:17.709428   84547 cri.go:89] found id: ""
	I1210 00:03:17.709465   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.709476   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:17.709484   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:17.709544   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:17.764088   84547 cri.go:89] found id: ""
	I1210 00:03:17.764121   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.764132   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:17.764139   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:17.764200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:17.797908   84547 cri.go:89] found id: ""
	I1210 00:03:17.797933   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.797944   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:17.797953   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:17.798016   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:17.829580   84547 cri.go:89] found id: ""
	I1210 00:03:17.829609   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.829620   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:17.829628   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:17.829687   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:17.865106   84547 cri.go:89] found id: ""
	I1210 00:03:17.865136   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.865145   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:17.865150   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:17.865200   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:17.896757   84547 cri.go:89] found id: ""
	I1210 00:03:17.896791   84547 logs.go:282] 0 containers: []
	W1210 00:03:17.896803   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:17.896814   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:17.896830   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:17.977210   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:17.977244   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:18.016843   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:18.016867   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:18.065263   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:18.065294   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:18.078188   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:18.078212   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:18.148164   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:17.698675   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:20.199732   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:19.755949   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:21.756959   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:19.990714   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:21.990766   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:23.991443   83900 pod_ready.go:103] pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:23.991468   83900 pod_ready.go:82] duration metric: took 4m0.007840011s for pod "metrics-server-6867b74b74-hg7c5" in "kube-system" namespace to be "Ready" ...
	E1210 00:03:23.991477   83900 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 00:03:23.991484   83900 pod_ready.go:39] duration metric: took 4m6.196076299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:03:23.991501   83900 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:03:23.991534   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:23.991610   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:24.032198   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:24.032227   83900 cri.go:89] found id: ""
	I1210 00:03:24.032237   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:24.032303   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.037671   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:24.037746   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:24.076467   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:24.076499   83900 cri.go:89] found id: ""
	I1210 00:03:24.076507   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:24.076557   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.081125   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:24.081193   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:24.125433   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:24.125472   83900 cri.go:89] found id: ""
	I1210 00:03:24.125483   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:24.125542   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.131023   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:24.131097   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:24.173215   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:24.173239   83900 cri.go:89] found id: ""
	I1210 00:03:24.173247   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:24.173303   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.179027   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:24.179108   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:24.228905   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:24.228936   83900 cri.go:89] found id: ""
	I1210 00:03:24.228946   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:24.229004   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.234441   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:24.234520   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:20.648921   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:20.661458   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:20.661516   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:20.693473   84547 cri.go:89] found id: ""
	I1210 00:03:20.693509   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.693518   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:20.693524   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:20.693576   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:20.726279   84547 cri.go:89] found id: ""
	I1210 00:03:20.726301   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.726309   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:20.726314   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:20.726375   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:20.759895   84547 cri.go:89] found id: ""
	I1210 00:03:20.759922   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.759931   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:20.759936   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:20.759988   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:20.793934   84547 cri.go:89] found id: ""
	I1210 00:03:20.793964   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.793974   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:20.793982   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:20.794049   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:20.828040   84547 cri.go:89] found id: ""
	I1210 00:03:20.828066   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.828077   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:20.828084   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:20.828143   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:20.861917   84547 cri.go:89] found id: ""
	I1210 00:03:20.861950   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.861960   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:20.861967   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:20.862028   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:20.896458   84547 cri.go:89] found id: ""
	I1210 00:03:20.896481   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.896489   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:20.896494   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:20.896551   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:20.931014   84547 cri.go:89] found id: ""
	I1210 00:03:20.931044   84547 logs.go:282] 0 containers: []
	W1210 00:03:20.931052   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:20.931061   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:20.931072   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:20.968693   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:20.968718   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:21.021880   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:21.021917   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:21.035848   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:21.035881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:21.104570   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:21.104604   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:21.104621   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:23.679447   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:23.692326   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:23.692426   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:23.729773   84547 cri.go:89] found id: ""
	I1210 00:03:23.729796   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.729804   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:23.729809   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:23.729855   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:23.765875   84547 cri.go:89] found id: ""
	I1210 00:03:23.765905   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.765915   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:23.765922   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:23.765984   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:23.800785   84547 cri.go:89] found id: ""
	I1210 00:03:23.800821   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.800831   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:23.800838   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:23.800902   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:23.838137   84547 cri.go:89] found id: ""
	I1210 00:03:23.838160   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.838168   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:23.838173   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:23.838222   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:23.869916   84547 cri.go:89] found id: ""
	I1210 00:03:23.869947   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.869958   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:23.869966   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:23.870027   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:23.903939   84547 cri.go:89] found id: ""
	I1210 00:03:23.903962   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.903969   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:23.903975   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:23.904021   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:23.937092   84547 cri.go:89] found id: ""
	I1210 00:03:23.937119   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.937127   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:23.937133   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:23.937194   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:23.970562   84547 cri.go:89] found id: ""
	I1210 00:03:23.970599   84547 logs.go:282] 0 containers: []
	W1210 00:03:23.970611   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:23.970622   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:23.970641   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:23.983364   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:23.983394   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:24.066624   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:24.066645   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:24.066657   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:24.151466   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:24.151502   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:24.204590   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:24.204615   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:22.698354   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.698462   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.256922   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:26.755965   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:24.279840   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:24.279859   83900 cri.go:89] found id: ""
	I1210 00:03:24.279866   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:24.279922   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.283898   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:24.283972   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:24.319541   83900 cri.go:89] found id: ""
	I1210 00:03:24.319598   83900 logs.go:282] 0 containers: []
	W1210 00:03:24.319612   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:24.319624   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:24.319686   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:24.356205   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:24.356228   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:24.356234   83900 cri.go:89] found id: ""
	I1210 00:03:24.356242   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:24.356302   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.361772   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:24.366656   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:24.366690   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:24.408922   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:24.408955   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:24.466720   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:24.466762   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:24.500254   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:24.500290   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:24.535025   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:24.535058   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:24.575706   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:24.575732   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:24.645227   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:24.645265   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:24.658833   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:24.658878   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:24.702076   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:24.702107   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:24.738869   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:24.738900   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:25.204715   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:25.204753   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:25.320267   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:25.320315   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:25.370599   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:25.370635   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:27.910863   83900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:27.925070   83900 api_server.go:72] duration metric: took 4m17.856306845s to wait for apiserver process to appear ...
	I1210 00:03:27.925098   83900 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:03:27.925164   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:27.925227   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:27.959991   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:27.960017   83900 cri.go:89] found id: ""
	I1210 00:03:27.960026   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:27.960081   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:27.964069   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:27.964128   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:27.997618   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:27.997640   83900 cri.go:89] found id: ""
	I1210 00:03:27.997650   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:27.997710   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.001497   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:28.001570   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:28.034858   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:28.034883   83900 cri.go:89] found id: ""
	I1210 00:03:28.034891   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:28.034953   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.038775   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:28.038852   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:28.074473   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:28.074497   83900 cri.go:89] found id: ""
	I1210 00:03:28.074506   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:28.074568   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.079082   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:28.079149   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:28.113948   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:28.113971   83900 cri.go:89] found id: ""
	I1210 00:03:28.113981   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:28.114045   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.118930   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:28.118982   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:28.162794   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:28.162820   83900 cri.go:89] found id: ""
	I1210 00:03:28.162827   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:28.162872   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.166637   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:28.166715   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:28.206591   83900 cri.go:89] found id: ""
	I1210 00:03:28.206619   83900 logs.go:282] 0 containers: []
	W1210 00:03:28.206627   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:28.206633   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:28.206693   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:28.251722   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:28.251748   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:28.251754   83900 cri.go:89] found id: ""
	I1210 00:03:28.251763   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:28.251821   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.256785   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:28.260355   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:28.260378   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:28.295801   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:28.295827   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:28.734920   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:28.734963   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:28.804266   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:28.804305   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:28.818223   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:28.818251   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:28.931008   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:28.931044   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:28.977544   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:28.977592   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:29.016645   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:29.016679   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:29.052920   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:29.052945   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:29.094464   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:29.094497   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:29.132520   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:29.132545   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:29.180364   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:29.180396   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:29.216627   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:29.216657   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:26.774169   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:26.786888   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:26.786964   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:26.820391   84547 cri.go:89] found id: ""
	I1210 00:03:26.820423   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.820433   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:26.820441   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:26.820504   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:26.853927   84547 cri.go:89] found id: ""
	I1210 00:03:26.853955   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.853963   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:26.853971   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:26.854031   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:26.887309   84547 cri.go:89] found id: ""
	I1210 00:03:26.887337   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.887347   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:26.887353   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:26.887415   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:26.923869   84547 cri.go:89] found id: ""
	I1210 00:03:26.923897   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.923908   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:26.923915   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:26.923983   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:26.959919   84547 cri.go:89] found id: ""
	I1210 00:03:26.959952   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.959964   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:26.959971   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:26.960032   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:26.993742   84547 cri.go:89] found id: ""
	I1210 00:03:26.993775   84547 logs.go:282] 0 containers: []
	W1210 00:03:26.993787   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:26.993794   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:26.993853   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:27.026977   84547 cri.go:89] found id: ""
	I1210 00:03:27.027011   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.027021   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:27.027028   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:27.027090   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:27.064135   84547 cri.go:89] found id: ""
	I1210 00:03:27.064164   84547 logs.go:282] 0 containers: []
	W1210 00:03:27.064173   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:27.064181   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:27.064192   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:27.101758   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:27.101784   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:27.155536   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:27.155592   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:27.168891   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:27.168914   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:27.242486   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:27.242511   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:27.242525   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:29.821740   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:29.834536   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:29.834604   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:29.868316   84547 cri.go:89] found id: ""
	I1210 00:03:29.868341   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.868348   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:29.868354   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:29.868402   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:29.902290   84547 cri.go:89] found id: ""
	I1210 00:03:29.902320   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.902331   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:29.902338   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:29.902399   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:29.948750   84547 cri.go:89] found id: ""
	I1210 00:03:29.948780   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.948792   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:29.948800   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:29.948864   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:29.994587   84547 cri.go:89] found id: ""
	I1210 00:03:29.994618   84547 logs.go:282] 0 containers: []
	W1210 00:03:29.994629   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:29.994636   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:29.994694   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:30.042216   84547 cri.go:89] found id: ""
	I1210 00:03:30.042243   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.042256   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:30.042264   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:30.042345   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:30.075964   84547 cri.go:89] found id: ""
	I1210 00:03:30.075989   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.075999   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:30.076007   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:30.076072   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:30.108647   84547 cri.go:89] found id: ""
	I1210 00:03:30.108676   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.108687   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:30.108695   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:30.108760   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:30.140983   84547 cri.go:89] found id: ""
	I1210 00:03:30.141013   84547 logs.go:282] 0 containers: []
	W1210 00:03:30.141022   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:30.141030   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:30.141040   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:30.198281   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:30.198319   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:30.213820   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:30.213849   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:30.283907   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:30.283927   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:30.283940   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:30.360731   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:30.360768   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:27.197512   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:29.698238   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.698679   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:28.757444   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.255481   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:31.759061   83900 api_server.go:253] Checking apiserver healthz at https://192.168.50.19:8443/healthz ...
	I1210 00:03:31.763064   83900 api_server.go:279] https://192.168.50.19:8443/healthz returned 200:
	ok
	I1210 00:03:31.763974   83900 api_server.go:141] control plane version: v1.31.2
	I1210 00:03:31.763992   83900 api_server.go:131] duration metric: took 3.83888731s to wait for apiserver health ...
	I1210 00:03:31.763999   83900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:03:31.764021   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:31.764077   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:31.798282   83900 cri.go:89] found id: "07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:31.798312   83900 cri.go:89] found id: ""
	I1210 00:03:31.798320   83900 logs.go:282] 1 containers: [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7]
	I1210 00:03:31.798375   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.802661   83900 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:31.802726   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:31.837861   83900 cri.go:89] found id: "c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:31.837888   83900 cri.go:89] found id: ""
	I1210 00:03:31.837896   83900 logs.go:282] 1 containers: [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9]
	I1210 00:03:31.837943   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.842139   83900 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:31.842224   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:31.876940   83900 cri.go:89] found id: "db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:31.876967   83900 cri.go:89] found id: ""
	I1210 00:03:31.876977   83900 logs.go:282] 1 containers: [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71]
	I1210 00:03:31.877043   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.881416   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:31.881491   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:31.915449   83900 cri.go:89] found id: "f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:31.915481   83900 cri.go:89] found id: ""
	I1210 00:03:31.915491   83900 logs.go:282] 1 containers: [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de]
	I1210 00:03:31.915575   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.919342   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:31.919400   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:31.954554   83900 cri.go:89] found id: "a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:31.954583   83900 cri.go:89] found id: ""
	I1210 00:03:31.954592   83900 logs.go:282] 1 containers: [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994]
	I1210 00:03:31.954641   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:31.958484   83900 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:31.958569   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:32.000361   83900 cri.go:89] found id: "a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:32.000390   83900 cri.go:89] found id: ""
	I1210 00:03:32.000401   83900 logs.go:282] 1 containers: [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538]
	I1210 00:03:32.000462   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.004339   83900 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:32.004400   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:32.043231   83900 cri.go:89] found id: ""
	I1210 00:03:32.043254   83900 logs.go:282] 0 containers: []
	W1210 00:03:32.043279   83900 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:32.043285   83900 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 00:03:32.043334   83900 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 00:03:32.079572   83900 cri.go:89] found id: "b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:32.079597   83900 cri.go:89] found id: "e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:32.079608   83900 cri.go:89] found id: ""
	I1210 00:03:32.079615   83900 logs.go:282] 2 containers: [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1]
	I1210 00:03:32.079661   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.083526   83900 ssh_runner.go:195] Run: which crictl
	I1210 00:03:32.087101   83900 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:32.087126   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:32.494977   83900 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:32.495021   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:32.509299   83900 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:32.509342   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 00:03:32.612029   83900 logs.go:123] Gathering logs for kube-apiserver [07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7] ...
	I1210 00:03:32.612066   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07b6833b28b166aeb43f13797f2357acd86223c1e09700b3cc9c6b4fd651fec7"
	I1210 00:03:32.656868   83900 logs.go:123] Gathering logs for kube-scheduler [f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de] ...
	I1210 00:03:32.656900   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251f2ec97259db41c6e23c94802746e1c75f296cb9c8294e1aa2963e7c538de"
	I1210 00:03:32.695347   83900 logs.go:123] Gathering logs for storage-provisioner [b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7] ...
	I1210 00:03:32.695376   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794fd5af22492264e07f2f36923941a0f1d1edae654d8b09fce96b9cffd7aa7"
	I1210 00:03:32.732071   83900 logs.go:123] Gathering logs for storage-provisioner [e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1] ...
	I1210 00:03:32.732100   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6a287aaa2bb1de7c6200fa605884760d8095db1153ed892accfad81422db5b1"
	I1210 00:03:32.767692   83900 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:32.767718   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:32.836047   83900 logs.go:123] Gathering logs for etcd [c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9] ...
	I1210 00:03:32.836088   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c641220f93efed1318e3e8394acf00aada59dab0008fb028a29c0e6ccff443b9"
	I1210 00:03:32.887136   83900 logs.go:123] Gathering logs for coredns [db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71] ...
	I1210 00:03:32.887175   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9231487d25e403239193247a5b12b5259235270071ccb3a1db26db3bdaae71"
	I1210 00:03:32.929836   83900 logs.go:123] Gathering logs for kube-proxy [a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994] ...
	I1210 00:03:32.929873   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d14690e81c5ccae0b055ce0cf483bf3ecd9697a91fbe1fb198617088d0994"
	I1210 00:03:32.972459   83900 logs.go:123] Gathering logs for kube-controller-manager [a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538] ...
	I1210 00:03:32.972492   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8a1911851cba96026990f18f56154ba60ccc29c98da3a25587b4bbc46b57538"
	I1210 00:03:33.029387   83900 logs.go:123] Gathering logs for container status ...
	I1210 00:03:33.029415   83900 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:32.904302   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:32.919123   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:32.919209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:32.952847   84547 cri.go:89] found id: ""
	I1210 00:03:32.952879   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.952889   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:32.952897   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:32.952961   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:32.986993   84547 cri.go:89] found id: ""
	I1210 00:03:32.987020   84547 logs.go:282] 0 containers: []
	W1210 00:03:32.987029   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:32.987035   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:32.987085   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:33.027506   84547 cri.go:89] found id: ""
	I1210 00:03:33.027536   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.027548   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:33.027556   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:33.027630   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:33.061553   84547 cri.go:89] found id: ""
	I1210 00:03:33.061588   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.061605   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:33.061613   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:33.061673   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:33.097640   84547 cri.go:89] found id: ""
	I1210 00:03:33.097679   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.097693   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:33.097702   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:33.097783   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:33.131725   84547 cri.go:89] found id: ""
	I1210 00:03:33.131758   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.131768   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:33.131775   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:33.131839   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:33.165731   84547 cri.go:89] found id: ""
	I1210 00:03:33.165759   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.165771   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:33.165779   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:33.165841   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:33.200288   84547 cri.go:89] found id: ""
	I1210 00:03:33.200310   84547 logs.go:282] 0 containers: []
	W1210 00:03:33.200320   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:33.200338   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:33.200351   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:33.251524   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:33.251577   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:33.266197   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:33.266223   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:33.343529   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:33.343583   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:33.343600   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:33.420133   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:33.420175   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
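The 84547 cycle above gathers node diagnostics by shelling out to journalctl, dmesg, and crictl (with a docker fallback). A minimal Go sketch of that pattern, run locally for illustration rather than over SSH as minikube does, with the command strings copied from the log lines above; this is not minikube's own implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Diagnostic commands taken from the log above; minikube runs them on the node via ssh_runner.
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}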
	I1210 00:03:35.577482   83900 system_pods.go:59] 8 kube-system pods found
	I1210 00:03:35.577511   83900 system_pods.go:61] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running
	I1210 00:03:35.577516   83900 system_pods.go:61] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running
	I1210 00:03:35.577520   83900 system_pods.go:61] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running
	I1210 00:03:35.577524   83900 system_pods.go:61] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running
	I1210 00:03:35.577527   83900 system_pods.go:61] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running
	I1210 00:03:35.577529   83900 system_pods.go:61] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running
	I1210 00:03:35.577537   83900 system_pods.go:61] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:03:35.577542   83900 system_pods.go:61] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running
	I1210 00:03:35.577549   83900 system_pods.go:74] duration metric: took 3.81354476s to wait for pod list to return data ...
	I1210 00:03:35.577556   83900 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:03:35.579951   83900 default_sa.go:45] found service account: "default"
	I1210 00:03:35.579971   83900 default_sa.go:55] duration metric: took 2.410307ms for default service account to be created ...
	I1210 00:03:35.579977   83900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:03:35.584102   83900 system_pods.go:86] 8 kube-system pods found
	I1210 00:03:35.584126   83900 system_pods.go:89] "coredns-7c65d6cfc9-qvtlr" [1ef5707d-5f06-46d0-809b-da79f79c49e5] Running
	I1210 00:03:35.584131   83900 system_pods.go:89] "etcd-embed-certs-825613" [3213929f-0200-4e7d-9b69-967463bfc194] Running
	I1210 00:03:35.584136   83900 system_pods.go:89] "kube-apiserver-embed-certs-825613" [d64b8c55-7da1-4b8e-864e-0727676b17a1] Running
	I1210 00:03:35.584142   83900 system_pods.go:89] "kube-controller-manager-embed-certs-825613" [61372146-3fe1-4f4e-9d8e-d85cc5ac8369] Running
	I1210 00:03:35.584148   83900 system_pods.go:89] "kube-proxy-rn6fg" [6db02558-bfa6-4c5f-a120-aed13575b273] Running
	I1210 00:03:35.584153   83900 system_pods.go:89] "kube-scheduler-embed-certs-825613" [b9c75ecd-add3-4a19-86e0-08326eea0f6b] Running
	I1210 00:03:35.584163   83900 system_pods.go:89] "metrics-server-6867b74b74-hg7c5" [2a657b1b-4435-42b5-aef2-deebf7865c83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:03:35.584168   83900 system_pods.go:89] "storage-provisioner" [e5cabae9-bb71-4b5e-9a43-0dce4a0733a1] Running
	I1210 00:03:35.584181   83900 system_pods.go:126] duration metric: took 4.196356ms to wait for k8s-apps to be running ...
	I1210 00:03:35.584192   83900 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:03:35.584235   83900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:03:35.598192   83900 system_svc.go:56] duration metric: took 13.993491ms WaitForService to wait for kubelet
	I1210 00:03:35.598218   83900 kubeadm.go:582] duration metric: took 4m25.529459505s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:03:35.598238   83900 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:03:35.600992   83900 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:03:35.601012   83900 node_conditions.go:123] node cpu capacity is 2
	I1210 00:03:35.601025   83900 node_conditions.go:105] duration metric: took 2.782415ms to run NodePressure ...
	I1210 00:03:35.601038   83900 start.go:241] waiting for startup goroutines ...
	I1210 00:03:35.601049   83900 start.go:246] waiting for cluster config update ...
	I1210 00:03:35.601063   83900 start.go:255] writing updated cluster config ...
	I1210 00:03:35.601360   83900 ssh_runner.go:195] Run: rm -f paused
	I1210 00:03:35.647487   83900 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:03:35.650508   83900 out.go:177] * Done! kubectl is now configured to use "embed-certs-825613" cluster and "default" namespace by default
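Immediately before the "Done!" line, the embed-certs run walks through its readiness gates: list the kube-system pods, confirm the default service account exists, check that the kubelet service is active, and read node capacity. A minimal client-go sketch of the pod-phase portion of that check, assuming the kubeconfig path seen in the log; an illustration only, not minikube's own code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, taken from the commands in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until every kube-system pod reports Running (or Succeeded),
	// roughly what the "waiting for k8s-apps to be running" gate verifies.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			if err != nil {
				return false, nil // retry on transient API errors
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
					fmt.Printf("pod %q is %s\n", p.Name, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all kube-system pods are running")
}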
	I1210 00:03:34.199682   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:36.696731   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:33.258255   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:35.756802   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:35.971411   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:35.988993   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:35.989059   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:36.022639   84547 cri.go:89] found id: ""
	I1210 00:03:36.022673   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.022684   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:36.022692   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:36.022758   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:36.055421   84547 cri.go:89] found id: ""
	I1210 00:03:36.055452   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.055461   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:36.055466   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:36.055514   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:36.087690   84547 cri.go:89] found id: ""
	I1210 00:03:36.087721   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.087731   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:36.087738   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:36.087802   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:36.125209   84547 cri.go:89] found id: ""
	I1210 00:03:36.125240   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.125249   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:36.125254   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:36.125304   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:36.157544   84547 cri.go:89] found id: ""
	I1210 00:03:36.157586   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.157599   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:36.157607   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:36.157676   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:36.193415   84547 cri.go:89] found id: ""
	I1210 00:03:36.193448   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.193459   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:36.193466   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:36.193525   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:36.228941   84547 cri.go:89] found id: ""
	I1210 00:03:36.228971   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.228982   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:36.228989   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:36.229052   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:36.264821   84547 cri.go:89] found id: ""
	I1210 00:03:36.264851   84547 logs.go:282] 0 containers: []
	W1210 00:03:36.264862   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:36.264873   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:36.264887   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:36.314841   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:36.314881   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:36.328664   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:36.328695   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:36.402929   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:36.402956   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:36.402970   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:36.480270   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:36.480306   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:39.022748   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:39.036323   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:39.036382   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:39.070582   84547 cri.go:89] found id: ""
	I1210 00:03:39.070608   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.070616   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:39.070622   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:39.070690   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:39.108783   84547 cri.go:89] found id: ""
	I1210 00:03:39.108814   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.108825   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:39.108832   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:39.108889   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:39.142300   84547 cri.go:89] found id: ""
	I1210 00:03:39.142330   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.142338   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:39.142344   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:39.142400   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:39.177859   84547 cri.go:89] found id: ""
	I1210 00:03:39.177891   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.177903   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:39.177911   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:39.177965   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:39.212109   84547 cri.go:89] found id: ""
	I1210 00:03:39.212140   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.212152   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:39.212160   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:39.212209   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:39.245451   84547 cri.go:89] found id: ""
	I1210 00:03:39.245490   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.245501   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:39.245509   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:39.245569   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:39.278925   84547 cri.go:89] found id: ""
	I1210 00:03:39.278957   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.278967   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:39.278974   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:39.279063   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:39.312322   84547 cri.go:89] found id: ""
	I1210 00:03:39.312350   84547 logs.go:282] 0 containers: []
	W1210 00:03:39.312358   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:39.312367   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:39.312377   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:39.362844   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:39.362882   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:39.375938   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:39.375967   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:39.443649   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:39.443676   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:39.443691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:39.523722   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:39.523757   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:38.698105   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:41.198814   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:38.256567   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:40.257434   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:42.062317   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:42.075094   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:03:42.075157   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:03:42.106719   84547 cri.go:89] found id: ""
	I1210 00:03:42.106744   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.106751   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:03:42.106757   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:03:42.106805   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:03:42.141977   84547 cri.go:89] found id: ""
	I1210 00:03:42.142011   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.142022   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:03:42.142029   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:03:42.142089   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:03:42.175117   84547 cri.go:89] found id: ""
	I1210 00:03:42.175151   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.175164   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:03:42.175172   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:03:42.175243   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:03:42.209871   84547 cri.go:89] found id: ""
	I1210 00:03:42.209900   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.209911   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:03:42.209919   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:03:42.209982   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:03:42.252784   84547 cri.go:89] found id: ""
	I1210 00:03:42.252812   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.252822   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:03:42.252830   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:03:42.252891   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:03:42.285934   84547 cri.go:89] found id: ""
	I1210 00:03:42.285961   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.285973   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:03:42.285980   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:03:42.286040   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:03:42.327393   84547 cri.go:89] found id: ""
	I1210 00:03:42.327423   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.327433   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:03:42.327440   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:03:42.327498   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:03:42.362244   84547 cri.go:89] found id: ""
	I1210 00:03:42.362279   84547 logs.go:282] 0 containers: []
	W1210 00:03:42.362289   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:03:42.362302   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:03:42.362317   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:03:42.441656   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:03:42.441691   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:03:42.479357   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:03:42.479397   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:03:42.533002   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:03:42.533038   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:03:42.546485   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:03:42.546514   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:03:42.616912   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:03:45.117156   84547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:03:45.131265   84547 kubeadm.go:597] duration metric: took 4m2.157458218s to restartPrimaryControlPlane
	W1210 00:03:45.131350   84547 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:03:45.131379   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:03:43.699026   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:46.197420   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:42.756686   84259 pod_ready.go:103] pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:44.251077   84259 pod_ready.go:82] duration metric: took 4m0.000996899s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" ...
	E1210 00:03:44.251112   84259 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-lgzdz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 00:03:44.251136   84259 pod_ready.go:39] duration metric: took 4m13.15350289s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:03:44.251170   84259 kubeadm.go:597] duration metric: took 4m23.227405415s to restartPrimaryControlPlane
	W1210 00:03:44.251238   84259 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:03:44.251368   84259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:03:46.778025   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.646620044s)
	I1210 00:03:46.778099   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:03:46.792384   84547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:03:46.802897   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:03:46.813393   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:03:46.813417   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:03:46.813473   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:03:46.823016   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:03:46.823093   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:03:46.832912   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:03:46.841961   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:03:46.842019   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:03:46.851160   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.859879   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:03:46.859938   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:03:46.869192   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:03:46.878423   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:03:46.878487   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:03:46.888103   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:03:47.105146   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
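The cleanup sequence above greps each static kubeconfig under /etc/kubernetes for the expected control-plane URL and removes any file that does not contain it (here every grep fails simply because the files are absent) before kubeadm init regenerates them. A compact sketch of that loop with the URL taken from the log; the error handling is hypothetical and this is not the project's actual code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		// grep exits non-zero when the pattern (or the file itself) is missing;
		// either way the stale file is removed so kubeadm init can write a fresh one.
		if err := exec.Command("sudo", "grep", endpoint, c).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing\n", c, endpoint)
			if err := exec.Command("sudo", "rm", "-f", c).Run(); err != nil {
				fmt.Fprintf(os.Stderr, "remove %s: %v\n", c, err)
			}
		}
	}
}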
	I1210 00:03:48.698211   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:51.197463   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:53.198578   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:55.697251   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:03:57.698289   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:00.198291   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:02.696926   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:04.697431   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:06.698260   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:10.270952   84259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.019545181s)
	I1210 00:04:10.271025   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:10.292112   84259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:04:10.307538   84259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:04:10.325375   84259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:04:10.325406   84259 kubeadm.go:157] found existing configuration files:
	
	I1210 00:04:10.325465   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 00:04:10.338892   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:04:10.338960   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:04:10.353888   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 00:04:10.364787   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:04:10.364852   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:04:10.374513   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 00:04:10.393430   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:04:10.393486   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:04:10.403621   84259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 00:04:10.412759   84259 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:04:10.412824   84259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:04:10.422153   84259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:04:10.468789   84259 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:04:10.468852   84259 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:04:10.566712   84259 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:04:10.566840   84259 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:04:10.566939   84259 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:04:10.574253   84259 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:04:10.576376   84259 out.go:235]   - Generating certificates and keys ...
	I1210 00:04:10.576485   84259 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:04:10.576566   84259 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:04:10.576722   84259 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:04:10.576816   84259 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:04:10.576915   84259 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:04:10.576991   84259 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:04:10.577107   84259 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:04:10.577214   84259 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:04:10.577319   84259 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:04:10.577433   84259 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:04:10.577522   84259 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:04:10.577616   84259 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:04:10.870887   84259 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:04:10.977055   84259 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:04:11.160320   84259 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:04:11.207864   84259 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:04:11.315037   84259 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:04:11.315715   84259 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:04:11.319135   84259 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:04:09.197254   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:11.197831   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:11.321063   84259 out.go:235]   - Booting up control plane ...
	I1210 00:04:11.321193   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:04:11.321296   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:04:11.322134   84259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:04:11.341567   84259 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:04:11.347658   84259 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:04:11.347736   84259 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:04:11.492127   84259 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:04:11.492293   84259 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:04:11.994410   84259 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.01471ms
	I1210 00:04:11.994533   84259 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
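The kubelet-check and api-check phases above poll plain HTTP health endpoints: the kubelet at http://127.0.0.1:10248/healthz and then the API server's /healthz, each with a 4m0s budget. A minimal sketch of the same probe against the kubelet endpoint, with an illustrative per-request timeout:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(4 * time.Minute) // same budget kubeadm advertises
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("kubelet is healthy: %s\n", body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("kubelet did not become healthy within 4m0s")
}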
	I1210 00:04:13.697731   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:16.198771   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:16.998638   84259 kubeadm.go:310] [api-check] The API server is healthy after 5.002934845s
	I1210 00:04:17.009930   84259 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:04:17.025483   84259 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:04:17.059445   84259 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:04:17.059762   84259 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-871210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:04:17.071665   84259 kubeadm.go:310] [bootstrap-token] Using token: 1x1r34.gs3p33sqgju9dylj
	I1210 00:04:17.073936   84259 out.go:235]   - Configuring RBAC rules ...
	I1210 00:04:17.074055   84259 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:04:17.079920   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:04:17.090408   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:04:17.094072   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:04:17.096951   84259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:04:17.099929   84259 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:04:17.404431   84259 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:04:17.837125   84259 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:04:18.404721   84259 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:04:18.405661   84259 kubeadm.go:310] 
	I1210 00:04:18.405757   84259 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:04:18.405770   84259 kubeadm.go:310] 
	I1210 00:04:18.405871   84259 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:04:18.405883   84259 kubeadm.go:310] 
	I1210 00:04:18.405916   84259 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:04:18.406012   84259 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:04:18.406101   84259 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:04:18.406111   84259 kubeadm.go:310] 
	I1210 00:04:18.406197   84259 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:04:18.406209   84259 kubeadm.go:310] 
	I1210 00:04:18.406318   84259 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:04:18.406329   84259 kubeadm.go:310] 
	I1210 00:04:18.406412   84259 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:04:18.406482   84259 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:04:18.406544   84259 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:04:18.406550   84259 kubeadm.go:310] 
	I1210 00:04:18.406643   84259 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:04:18.406787   84259 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:04:18.406802   84259 kubeadm.go:310] 
	I1210 00:04:18.406919   84259 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 1x1r34.gs3p33sqgju9dylj \
	I1210 00:04:18.407089   84259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1210 00:04:18.407117   84259 kubeadm.go:310] 	--control-plane 
	I1210 00:04:18.407123   84259 kubeadm.go:310] 
	I1210 00:04:18.407207   84259 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:04:18.407214   84259 kubeadm.go:310] 
	I1210 00:04:18.407300   84259 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 1x1r34.gs3p33sqgju9dylj \
	I1210 00:04:18.407433   84259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1210 00:04:18.408197   84259 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:04:18.408272   84259 cni.go:84] Creating CNI manager for ""
	I1210 00:04:18.408289   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:04:18.410841   84259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:04:18.697285   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:20.698266   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:18.412209   84259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:04:18.422123   84259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 00:04:18.440972   84259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:04:18.441058   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:18.441138   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-871210 minikube.k8s.io/updated_at=2024_12_10T00_04_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=default-k8s-diff-port-871210 minikube.k8s.io/primary=true
	I1210 00:04:18.462185   84259 ops.go:34] apiserver oom_adj: -16
	I1210 00:04:18.625855   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:19.126760   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:19.626829   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:20.126663   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:20.626893   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:21.126851   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:21.625892   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.126904   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.626450   84259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:04:22.715909   84259 kubeadm.go:1113] duration metric: took 4.274924102s to wait for elevateKubeSystemPrivileges
	I1210 00:04:22.715958   84259 kubeadm.go:394] duration metric: took 5m1.740971462s to StartCluster
	I1210 00:04:22.715982   84259 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:04:22.716080   84259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:04:22.718538   84259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:04:22.718890   84259 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.54 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:04:22.718958   84259 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:04:22.719059   84259 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719079   84259 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.719091   84259 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:04:22.719100   84259 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719123   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.719120   84259 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-871210"
	I1210 00:04:22.719169   84259 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.719184   84259 addons.go:243] addon metrics-server should already be in state true
	I1210 00:04:22.719216   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.719133   84259 config.go:182] Loaded profile config "default-k8s-diff-port-871210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:04:22.719134   84259 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-871210"
	I1210 00:04:22.719582   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719631   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.719717   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719728   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.719753   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.719763   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.720519   84259 out.go:177] * Verifying Kubernetes components...
	I1210 00:04:22.722140   84259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:04:22.735592   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
	I1210 00:04:22.735716   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I1210 00:04:22.736075   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.736100   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.736605   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.736626   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.736605   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.736642   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.736995   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.737007   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.737217   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.737595   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.737639   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.739278   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41255
	I1210 00:04:22.741315   84259 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-871210"
	W1210 00:04:22.741337   84259 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:04:22.741367   84259 host.go:66] Checking if "default-k8s-diff-port-871210" exists ...
	I1210 00:04:22.741739   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.741778   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.742259   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.742829   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.742858   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.743182   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.743632   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.743663   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.754879   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1210 00:04:22.755310   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.756015   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.756032   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.756397   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.756576   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.758535   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.759514   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33065
	I1210 00:04:22.759891   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.759925   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39821
	I1210 00:04:22.760309   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.760330   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.760346   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.760435   84259 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:04:22.760689   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.760831   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.760845   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.760860   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.761166   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.761589   84259 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:04:22.761627   84259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:04:22.761737   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:04:22.761754   84259 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:04:22.761774   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.762882   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.764440   84259 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:04:22.764810   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.765316   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.765339   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.765474   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.765675   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.765828   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.765894   84259 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:04:22.765912   84259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:04:22.765919   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.765934   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.768979   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.769446   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.769498   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.769626   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.769832   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.769956   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.770078   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
	I1210 00:04:22.780995   84259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45841
	I1210 00:04:22.781451   84259 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:04:22.782077   84259 main.go:141] libmachine: Using API Version  1
	I1210 00:04:22.782098   84259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:04:22.782467   84259 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:04:22.782705   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetState
	I1210 00:04:22.784771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .DriverName
	I1210 00:04:22.785015   84259 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:04:22.785030   84259 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:04:22.785047   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHHostname
	I1210 00:04:22.788208   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.788659   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:5b:a3", ip: ""} in network mk-default-k8s-diff-port-871210: {Iface:virbr4 ExpiryTime:2024-12-10 00:59:06 +0000 UTC Type:0 Mac:52:54:00:5e:5b:a3 Iaid: IPaddr:192.168.72.54 Prefix:24 Hostname:default-k8s-diff-port-871210 Clientid:01:52:54:00:5e:5b:a3}
	I1210 00:04:22.788690   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | domain default-k8s-diff-port-871210 has defined IP address 192.168.72.54 and MAC address 52:54:00:5e:5b:a3 in network mk-default-k8s-diff-port-871210
	I1210 00:04:22.788771   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHPort
	I1210 00:04:22.789148   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHKeyPath
	I1210 00:04:22.789275   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .GetSSHUsername
	I1210 00:04:22.789386   84259 sshutil.go:53] new ssh client: &{IP:192.168.72.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/default-k8s-diff-port-871210/id_rsa Username:docker}
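
The sshutil lines above set up key-based SSH clients against 192.168.72.54:22 with the profile's id_rsa key and the docker user; the addon manifests are then copied and applied over those connections. A minimal sketch of that kind of connection in Go, assuming golang.org/x/crypto/ssh and a reachable guest VM (the key path, address and command are illustrative, not minikube's actual sshutil implementation):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Illustrative values; substitute your own machine's key, address and user.
	keyPath := "/path/to/machines/<profile>/id_rsa"
	addr := "192.168.72.54:22"

	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}

	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Run a command on the guest, like the systemctl/kubectl calls in the log.
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s (err=%v)\n", out, err)
}
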
	I1210 00:04:22.926076   84259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:04:22.945059   84259 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-871210" to be "Ready" ...
	I1210 00:04:22.970684   84259 node_ready.go:49] node "default-k8s-diff-port-871210" has status "Ready":"True"
	I1210 00:04:22.970727   84259 node_ready.go:38] duration metric: took 25.618738ms for node "default-k8s-diff-port-871210" to be "Ready" ...
	I1210 00:04:22.970740   84259 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:22.989661   84259 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:23.045411   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:04:23.045433   84259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:04:23.074907   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:04:23.076857   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:04:23.094809   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:04:23.094836   84259 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:04:23.125107   84259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:04:23.125136   84259 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:04:23.174286   84259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
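
The apply step above is just kubectl invoked on the guest with the in-cluster kubeconfig and the staged metrics-server manifests. A rough sketch of building that invocation with os/exec, assuming the binary and manifest paths from the log exist; running it locally rather than over SSH is a simplification:

package main

import (
	"log"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}

	args := []string{
		"env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.2/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	// Equivalent to: sudo KUBECONFIG=... kubectl apply -f ... -f ...
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("apply failed: %v\n%s", err, out)
	}
	log.Printf("applied addon manifests:\n%s", out)
}
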
	I1210 00:04:23.554521   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554556   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554568   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554589   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554855   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.554871   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.554880   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554882   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.554888   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.554893   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.554891   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.554903   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.554913   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.555087   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.555099   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.555283   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.555354   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.555379   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.579741   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.579775   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.580075   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.580091   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.580113   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.776674   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.776701   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.777020   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) DBG | Closing plugin on server side
	I1210 00:04:23.777060   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.777068   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.777076   84259 main.go:141] libmachine: Making call to close driver server
	I1210 00:04:23.777089   84259 main.go:141] libmachine: (default-k8s-diff-port-871210) Calling .Close
	I1210 00:04:23.777314   84259 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:04:23.777330   84259 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:04:23.777347   84259 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-871210"
	I1210 00:04:23.778992   84259 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 00:04:23.198263   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:25.198299   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:23.780354   84259 addons.go:510] duration metric: took 1.061403814s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 00:04:25.012490   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:27.495987   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:27.697345   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:29.698123   83859 pod_ready.go:103] pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:31.692183   83859 pod_ready.go:82] duration metric: took 4m0.000797786s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" ...
	E1210 00:04:31.692211   83859 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-sd58c" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 00:04:31.692228   83859 pod_ready.go:39] duration metric: took 4m11.541153015s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:31.692258   83859 kubeadm.go:597] duration metric: took 4m19.334550967s to restartPrimaryControlPlane
	W1210 00:04:31.692305   83859 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 00:04:31.692329   83859 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:04:29.995452   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:31.997927   84259 pod_ready.go:103] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:04:34.496689   84259 pod_ready.go:93] pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.496716   84259 pod_ready.go:82] duration metric: took 11.507027662s for pod "coredns-7c65d6cfc9-7xpcc" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.496730   84259 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.501166   84259 pod_ready.go:93] pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.501188   84259 pod_ready.go:82] duration metric: took 4.45016ms for pod "etcd-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.501198   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.506061   84259 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.506085   84259 pod_ready.go:82] duration metric: took 4.881919ms for pod "kube-apiserver-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.506096   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.510401   84259 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.510422   84259 pod_ready.go:82] duration metric: took 4.320761ms for pod "kube-controller-manager-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.510432   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pj85d" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.514319   84259 pod_ready.go:93] pod "kube-proxy-pj85d" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.514340   84259 pod_ready.go:82] duration metric: took 3.902541ms for pod "kube-proxy-pj85d" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.514348   84259 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.896369   84259 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace has status "Ready":"True"
	I1210 00:04:34.896393   84259 pod_ready.go:82] duration metric: took 382.038726ms for pod "kube-scheduler-default-k8s-diff-port-871210" in "kube-system" namespace to be "Ready" ...
	I1210 00:04:34.896401   84259 pod_ready.go:39] duration metric: took 11.925650242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:04:34.896415   84259 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:04:34.896466   84259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:04:34.910589   84259 api_server.go:72] duration metric: took 12.191654524s to wait for apiserver process to appear ...
	I1210 00:04:34.910617   84259 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:04:34.910639   84259 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8444/healthz ...
	I1210 00:04:34.916503   84259 api_server.go:279] https://192.168.72.54:8444/healthz returned 200:
	ok
	I1210 00:04:34.917955   84259 api_server.go:141] control plane version: v1.31.2
	I1210 00:04:34.917979   84259 api_server.go:131] duration metric: took 7.355946ms to wait for apiserver health ...
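
The healthz probe above is a plain HTTPS GET against the apiserver's /healthz endpoint, polled until it returns 200 with body "ok". A minimal sketch of that check, using the apiserver address from the log and skipping certificate verification the way a bootstrap-time probe typically does (a stricter client would pin the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cluster-internal certificate; skip
			// verification here for simplicity (or pin the cluster CA).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.54:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}
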
	I1210 00:04:34.917987   84259 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:04:35.098220   84259 system_pods.go:59] 9 kube-system pods found
	I1210 00:04:35.098252   84259 system_pods.go:61] "coredns-7c65d6cfc9-7xpcc" [6cff7e56-1785-41c8-bd9c-db9d3f0bd05f] Running
	I1210 00:04:35.098259   84259 system_pods.go:61] "coredns-7c65d6cfc9-z2n25" [b6b81952-7281-4705-9536-06eb939a5807] Running
	I1210 00:04:35.098265   84259 system_pods.go:61] "etcd-default-k8s-diff-port-871210" [e9c747a1-72c0-4b30-b401-c39a41fe5eb5] Running
	I1210 00:04:35.098271   84259 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-871210" [1a27b619-b576-4a36-b88a-0c498ad38628] Running
	I1210 00:04:35.098276   84259 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-871210" [d22a69b7-3389-46a9-9ca5-a3101aa34e05] Running
	I1210 00:04:35.098280   84259 system_pods.go:61] "kube-proxy-pj85d" [d1b9b056-f4a3-419c-86fa-a94d88464f74] Running
	I1210 00:04:35.098285   84259 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-871210" [50a03a5c-d0e5-454a-9a7b-49963ff59d06] Running
	I1210 00:04:35.098294   84259 system_pods.go:61] "metrics-server-6867b74b74-7g2qm" [49ac129a-c85d-4af1-b3b2-06bc10bced77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:04:35.098298   84259 system_pods.go:61] "storage-provisioner" [ea716edd-4030-4ec3-b094-c3a50154b473] Running
	I1210 00:04:35.098309   84259 system_pods.go:74] duration metric: took 180.315426ms to wait for pod list to return data ...
	I1210 00:04:35.098322   84259 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:04:35.294342   84259 default_sa.go:45] found service account: "default"
	I1210 00:04:35.294367   84259 default_sa.go:55] duration metric: took 196.039183ms for default service account to be created ...
	I1210 00:04:35.294376   84259 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:04:35.497331   84259 system_pods.go:86] 9 kube-system pods found
	I1210 00:04:35.497364   84259 system_pods.go:89] "coredns-7c65d6cfc9-7xpcc" [6cff7e56-1785-41c8-bd9c-db9d3f0bd05f] Running
	I1210 00:04:35.497369   84259 system_pods.go:89] "coredns-7c65d6cfc9-z2n25" [b6b81952-7281-4705-9536-06eb939a5807] Running
	I1210 00:04:35.497373   84259 system_pods.go:89] "etcd-default-k8s-diff-port-871210" [e9c747a1-72c0-4b30-b401-c39a41fe5eb5] Running
	I1210 00:04:35.497377   84259 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-871210" [1a27b619-b576-4a36-b88a-0c498ad38628] Running
	I1210 00:04:35.497381   84259 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-871210" [d22a69b7-3389-46a9-9ca5-a3101aa34e05] Running
	I1210 00:04:35.497384   84259 system_pods.go:89] "kube-proxy-pj85d" [d1b9b056-f4a3-419c-86fa-a94d88464f74] Running
	I1210 00:04:35.497387   84259 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-871210" [50a03a5c-d0e5-454a-9a7b-49963ff59d06] Running
	I1210 00:04:35.497396   84259 system_pods.go:89] "metrics-server-6867b74b74-7g2qm" [49ac129a-c85d-4af1-b3b2-06bc10bced77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:04:35.497400   84259 system_pods.go:89] "storage-provisioner" [ea716edd-4030-4ec3-b094-c3a50154b473] Running
	I1210 00:04:35.497409   84259 system_pods.go:126] duration metric: took 203.02694ms to wait for k8s-apps to be running ...
	I1210 00:04:35.497416   84259 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:04:35.497456   84259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:35.512217   84259 system_svc.go:56] duration metric: took 14.792056ms WaitForService to wait for kubelet
	I1210 00:04:35.512248   84259 kubeadm.go:582] duration metric: took 12.793318604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:04:35.512274   84259 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:04:35.695292   84259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:04:35.695318   84259 node_conditions.go:123] node cpu capacity is 2
	I1210 00:04:35.695329   84259 node_conditions.go:105] duration metric: took 183.048181ms to run NodePressure ...
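
The NodePressure step above reads the node's reported capacities (the 17734596Ki of ephemeral storage and 2 CPUs printed in the log) to confirm the VM is not undersized. A small client-go sketch that prints the same figures, assuming a kubeconfig on disk and the node name from the log:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-871210", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Capacity is a ResourceList keyed by resource name; assign to locals so
	// the Quantity values are addressable for String().
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
}
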
	I1210 00:04:35.695341   84259 start.go:241] waiting for startup goroutines ...
	I1210 00:04:35.695348   84259 start.go:246] waiting for cluster config update ...
	I1210 00:04:35.695361   84259 start.go:255] writing updated cluster config ...
	I1210 00:04:35.695666   84259 ssh_runner.go:195] Run: rm -f paused
	I1210 00:04:35.742539   84259 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:04:35.744394   84259 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-871210" cluster and "default" namespace by default
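
The final line above compares the local kubectl version (1.31.3) against the cluster version (1.31.2) and reports a minor skew of 0, which is within kubectl's supported +/-1 minor-version skew. A small self-contained sketch of that comparison (simple string parsing; real version handling would use a semver library):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return -1
	}
	m, err := strconv.Atoi(parts[1])
	if err != nil {
		return -1
	}
	return m
}

func main() {
	kubectlVersion := "1.31.3"
	clusterVersion := "1.31.2"

	skew := minor(kubectlVersion) - minor(clusterVersion)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
	if skew > 1 {
		fmt.Println("warning: kubectl is more than one minor version away from the cluster")
	}
}
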
	I1210 00:04:57.851348   83859 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.159001958s)
	I1210 00:04:57.851413   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:04:57.866601   83859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:04:57.876643   83859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:04:57.886172   83859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:04:57.886200   83859 kubeadm.go:157] found existing configuration files:
	
	I1210 00:04:57.886252   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:04:57.895643   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:04:57.895722   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:04:57.905397   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:04:57.914236   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:04:57.914299   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:04:57.923225   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:04:57.932422   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:04:57.932476   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:04:57.942840   83859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:04:57.952087   83859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:04:57.952159   83859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
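
The block above is the stale-kubeconfig check: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf it greps for the expected control-plane endpoint and removes the file when the endpoint is not found (here the greps fail only because kubeadm reset already deleted the files). A rough local-filesystem sketch of the same idea, with the endpoint string taken from the log (minikube runs these steps over SSH on the guest):

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}

	for _, name := range confs {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		if err != nil {
			// Missing file: nothing to clean up, matching the "No such file" grep errors above.
			fmt.Printf("%s: not present, skipping\n", path)
			continue
		}
		if !bytes.Contains(data, endpoint) {
			// Config points at a different endpoint; treat it as stale and remove it.
			fmt.Printf("%s: stale, removing\n", path)
			_ = os.Remove(path)
		} else {
			fmt.Printf("%s: up to date\n", path)
		}
	}
}
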
	I1210 00:04:57.961371   83859 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:04:58.005314   83859 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:04:58.005444   83859 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:04:58.098287   83859 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:04:58.098431   83859 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:04:58.098591   83859 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:04:58.106525   83859 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:04:58.109142   83859 out.go:235]   - Generating certificates and keys ...
	I1210 00:04:58.109219   83859 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:04:58.109320   83859 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:04:58.109456   83859 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:04:58.109536   83859 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:04:58.109637   83859 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:04:58.109728   83859 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:04:58.109840   83859 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:04:58.109940   83859 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:04:58.110047   83859 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:04:58.110152   83859 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:04:58.110225   83859 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:04:58.110296   83859 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:04:58.357649   83859 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:04:58.505840   83859 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:04:58.890560   83859 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:04:58.965928   83859 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:04:59.341665   83859 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:04:59.342240   83859 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:04:59.344644   83859 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:04:59.346225   83859 out.go:235]   - Booting up control plane ...
	I1210 00:04:59.346353   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:04:59.348071   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:04:59.348893   83859 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:04:59.366824   83859 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:04:59.372906   83859 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:04:59.372962   83859 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:04:59.509554   83859 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:04:59.509695   83859 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:05:00.014472   83859 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.085309ms
	I1210 00:05:00.014639   83859 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:05:05.017162   83859 kubeadm.go:310] [api-check] The API server is healthy after 5.002677832s
	I1210 00:05:05.029078   83859 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:05:05.048524   83859 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:05:05.080458   83859 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:05:05.080735   83859 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-048296 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:05:05.091793   83859 kubeadm.go:310] [bootstrap-token] Using token: y5jzuu.af7syybsclcivlzq
	I1210 00:05:05.093253   83859 out.go:235]   - Configuring RBAC rules ...
	I1210 00:05:05.093401   83859 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:05:05.098842   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:05:05.106341   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:05:05.109935   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:05:05.114403   83859 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:05:05.121436   83859 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:05:05.424690   83859 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:05:05.851096   83859 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:05:06.425724   83859 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:05:06.426713   83859 kubeadm.go:310] 
	I1210 00:05:06.426785   83859 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:05:06.426808   83859 kubeadm.go:310] 
	I1210 00:05:06.426904   83859 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:05:06.426936   83859 kubeadm.go:310] 
	I1210 00:05:06.426981   83859 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:05:06.427061   83859 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:05:06.427110   83859 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:05:06.427120   83859 kubeadm.go:310] 
	I1210 00:05:06.427199   83859 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:05:06.427210   83859 kubeadm.go:310] 
	I1210 00:05:06.427282   83859 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:05:06.427294   83859 kubeadm.go:310] 
	I1210 00:05:06.427381   83859 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:05:06.427486   83859 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:05:06.427623   83859 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:05:06.427635   83859 kubeadm.go:310] 
	I1210 00:05:06.427757   83859 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:05:06.427874   83859 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:05:06.427905   83859 kubeadm.go:310] 
	I1210 00:05:06.428032   83859 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y5jzuu.af7syybsclcivlzq \
	I1210 00:05:06.428167   83859 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b \
	I1210 00:05:06.428201   83859 kubeadm.go:310] 	--control-plane 
	I1210 00:05:06.428211   83859 kubeadm.go:310] 
	I1210 00:05:06.428322   83859 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:05:06.428332   83859 kubeadm.go:310] 
	I1210 00:05:06.428438   83859 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y5jzuu.af7syybsclcivlzq \
	I1210 00:05:06.428572   83859 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:988b8db589f8480ac81f4918593a60bc028857de125bd39ca70f8ea70fb46e5b 
	I1210 00:05:06.428746   83859 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
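
The join commands printed above authenticate the cluster to a joining node through --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A short standard-library sketch of recomputing that hash from the CA certificate, assuming the conventional ca.crt name under the certificateDir shown earlier in the log (/var/lib/minikube/certs):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}
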
	I1210 00:05:06.428832   83859 cni.go:84] Creating CNI manager for ""
	I1210 00:05:06.428849   83859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:05:06.431674   83859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:05:06.433006   83859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:05:06.444058   83859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
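
The step above writes a 496-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist. The exact file minikube generates is not shown in the log; the sketch below only illustrates the general shape of a bridge + host-local + portmap conflist, and every field value in it is an assumption rather than minikube's template:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	// Illustrative bridge CNI config; minikube's generated file may differ.
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}

	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
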
	I1210 00:05:06.462707   83859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:05:06.462838   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:06.462873   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-048296 minikube.k8s.io/updated_at=2024_12_10T00_05_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5 minikube.k8s.io/name=no-preload-048296 minikube.k8s.io/primary=true
	I1210 00:05:06.493379   83859 ops.go:34] apiserver oom_adj: -16
	I1210 00:05:06.666080   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:07.166762   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:07.666408   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:08.166269   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:08.666734   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:09.166797   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:09.666522   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.166230   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.667061   83859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:05:10.774149   83859 kubeadm.go:1113] duration metric: took 4.311371383s to wait for elevateKubeSystemPrivileges
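
The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: the command is retried roughly every half second until the default service account exists, after which the minikube-rbac cluster-admin binding created earlier can take effect. A compact sketch of that retry loop, shelling out to kubectl and assuming the kubeconfig path from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const kubeconfig = "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		// The service account controller creates "default" shortly after the
		// namespace appears; retry on a short interval instead of failing.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
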
	I1210 00:05:10.774188   83859 kubeadm.go:394] duration metric: took 4m58.464009996s to StartCluster
	I1210 00:05:10.774211   83859 settings.go:142] acquiring lock: {Name:mk5afe93d4b75b62f21fb4999a799ab702c984d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:10.774290   83859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1210 00:05:10.777711   83859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-18950/kubeconfig: {Name:mk7978ce6997aed8075f03ea8327af523cc6eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:10.778040   83859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:05:10.778133   83859 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:05:10.778230   83859 addons.go:69] Setting storage-provisioner=true in profile "no-preload-048296"
	I1210 00:05:10.778247   83859 addons.go:234] Setting addon storage-provisioner=true in "no-preload-048296"
	W1210 00:05:10.778255   83859 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:05:10.778249   83859 addons.go:69] Setting default-storageclass=true in profile "no-preload-048296"
	I1210 00:05:10.778261   83859 addons.go:69] Setting metrics-server=true in profile "no-preload-048296"
	I1210 00:05:10.778276   83859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-048296"
	I1210 00:05:10.778286   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.778299   83859 addons.go:234] Setting addon metrics-server=true in "no-preload-048296"
	I1210 00:05:10.778339   83859 config.go:182] Loaded profile config "no-preload-048296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1210 00:05:10.778394   83859 addons.go:243] addon metrics-server should already be in state true
	I1210 00:05:10.778446   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.778684   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778716   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778746   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.778721   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.778860   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.778897   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.779748   83859 out.go:177] * Verifying Kubernetes components...
	I1210 00:05:10.781467   83859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:05:10.795108   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34465
	I1210 00:05:10.795227   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I1210 00:05:10.795615   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.795709   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.796135   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.796160   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.796221   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.796240   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.796522   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.796539   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.797128   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.797162   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.797200   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.797167   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.797697   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I1210 00:05:10.798091   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.798587   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.798613   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.798982   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.799324   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.803238   83859 addons.go:234] Setting addon default-storageclass=true in "no-preload-048296"
	W1210 00:05:10.803262   83859 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:05:10.803291   83859 host.go:66] Checking if "no-preload-048296" exists ...
	I1210 00:05:10.803683   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.803722   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.814000   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I1210 00:05:10.814046   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I1210 00:05:10.814470   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.814686   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.814992   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.815014   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.815427   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.815448   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.815515   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.815748   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.815974   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.816171   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.817989   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.818439   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.820342   83859 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 00:05:10.820360   83859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:05:10.821535   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 00:05:10.821556   83859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 00:05:10.821577   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.821652   83859 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:05:10.821673   83859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:05:10.821690   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.825763   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826205   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.826228   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42881
	I1210 00:05:10.826252   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826275   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.826294   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826327   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.826228   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.826504   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.826563   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.826633   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.826711   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.826860   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.826864   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.827029   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:10.827152   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.827173   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.827241   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:10.827691   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.829421   83859 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19888-18950/.minikube/bin/docker-machine-driver-kvm2
	I1210 00:05:10.829471   83859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:10.867902   83859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I1210 00:05:10.868566   83859 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:10.869138   83859 main.go:141] libmachine: Using API Version  1
	I1210 00:05:10.869169   83859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:10.869575   83859 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:10.869782   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetState
	I1210 00:05:10.871531   83859 main.go:141] libmachine: (no-preload-048296) Calling .DriverName
	I1210 00:05:10.871792   83859 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:05:10.871810   83859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:05:10.871832   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHHostname
	I1210 00:05:10.874309   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.874822   83859 main.go:141] libmachine: (no-preload-048296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:cf:c7", ip: ""} in network mk-no-preload-048296: {Iface:virbr3 ExpiryTime:2024-12-10 00:59:46 +0000 UTC Type:0 Mac:52:54:00:c6:cf:c7 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:no-preload-048296 Clientid:01:52:54:00:c6:cf:c7}
	I1210 00:05:10.874845   83859 main.go:141] libmachine: (no-preload-048296) DBG | domain no-preload-048296 has defined IP address 192.168.61.182 and MAC address 52:54:00:c6:cf:c7 in network mk-no-preload-048296
	I1210 00:05:10.874999   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHPort
	I1210 00:05:10.875202   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHKeyPath
	I1210 00:05:10.875354   83859 main.go:141] libmachine: (no-preload-048296) Calling .GetSSHUsername
	I1210 00:05:10.875501   83859 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/no-preload-048296/id_rsa Username:docker}
	I1210 00:05:11.009426   83859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:05:11.030237   83859 node_ready.go:35] waiting up to 6m0s for node "no-preload-048296" to be "Ready" ...
	I1210 00:05:11.054344   83859 node_ready.go:49] node "no-preload-048296" has status "Ready":"True"
	I1210 00:05:11.054366   83859 node_ready.go:38] duration metric: took 24.096361ms for node "no-preload-048296" to be "Ready" ...
	I1210 00:05:11.054376   83859 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:05:11.071740   83859 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:11.123723   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:05:11.137744   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 00:05:11.137772   83859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 00:05:11.159680   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:05:11.179245   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 00:05:11.179267   83859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 00:05:11.240553   83859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:05:11.240580   83859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 00:05:11.311813   83859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 00:05:12.041427   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041463   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041509   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041532   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041886   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.041949   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.041954   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.041963   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.041967   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.041972   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.041932   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.041986   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.042087   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.042229   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.042245   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.042297   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.042319   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.092616   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.092639   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.092910   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.092923   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.535943   83859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.224081671s)
	I1210 00:05:12.536008   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.536020   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.536456   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.536459   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.536482   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.536491   83859 main.go:141] libmachine: Making call to close driver server
	I1210 00:05:12.536498   83859 main.go:141] libmachine: (no-preload-048296) Calling .Close
	I1210 00:05:12.536728   83859 main.go:141] libmachine: (no-preload-048296) DBG | Closing plugin on server side
	I1210 00:05:12.536735   83859 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:05:12.536750   83859 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:05:12.536761   83859 addons.go:475] Verifying addon metrics-server=true in "no-preload-048296"
	I1210 00:05:12.538638   83859 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 00:05:12.539910   83859 addons.go:510] duration metric: took 1.761785575s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 00:05:13.078954   83859 pod_ready.go:103] pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace has status "Ready":"False"
	I1210 00:05:13.579737   83859 pod_ready.go:93] pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:13.579760   83859 pod_ready.go:82] duration metric: took 2.507994954s for pod "coredns-7c65d6cfc9-56djc" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:13.579770   83859 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:14.586682   83859 pod_ready.go:93] pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:14.586704   83859 pod_ready.go:82] duration metric: took 1.006927767s for pod "coredns-7c65d6cfc9-8rxx7" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:14.586714   83859 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.092318   83859 pod_ready.go:93] pod "etcd-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.092345   83859 pod_ready.go:82] duration metric: took 505.624811ms for pod "etcd-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.092356   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.096802   83859 pod_ready.go:93] pod "kube-apiserver-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.096828   83859 pod_ready.go:82] duration metric: took 4.463914ms for pod "kube-apiserver-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.096839   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.100904   83859 pod_ready.go:93] pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.100923   83859 pod_ready.go:82] duration metric: took 4.07832ms for pod "kube-controller-manager-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.100932   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qklxb" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.176527   83859 pod_ready.go:93] pod "kube-proxy-qklxb" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:15.176552   83859 pod_ready.go:82] duration metric: took 75.613294ms for pod "kube-proxy-qklxb" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:15.176562   83859 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:17.182952   83859 pod_ready.go:103] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"False"
	I1210 00:05:18.181993   83859 pod_ready.go:93] pod "kube-scheduler-no-preload-048296" in "kube-system" namespace has status "Ready":"True"
	I1210 00:05:18.182016   83859 pod_ready.go:82] duration metric: took 3.005447779s for pod "kube-scheduler-no-preload-048296" in "kube-system" namespace to be "Ready" ...
	I1210 00:05:18.182024   83859 pod_ready.go:39] duration metric: took 7.127639413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:05:18.182038   83859 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:05:18.182084   83859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:05:18.196127   83859 api_server.go:72] duration metric: took 7.418043985s to wait for apiserver process to appear ...
	I1210 00:05:18.196155   83859 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:05:18.196176   83859 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I1210 00:05:18.200537   83859 api_server.go:279] https://192.168.61.182:8443/healthz returned 200:
	ok
	I1210 00:05:18.201476   83859 api_server.go:141] control plane version: v1.31.2
	I1210 00:05:18.201501   83859 api_server.go:131] duration metric: took 5.340199ms to wait for apiserver health ...
	I1210 00:05:18.201508   83859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:05:18.208072   83859 system_pods.go:59] 9 kube-system pods found
	I1210 00:05:18.208105   83859 system_pods.go:61] "coredns-7c65d6cfc9-56djc" [2e25bad9-b88a-4ac8-a180-968bf6b057a2] Running
	I1210 00:05:18.208114   83859 system_pods.go:61] "coredns-7c65d6cfc9-8rxx7" [3fdfd41b-8ef7-41af-a703-ebecfe9ad319] Running
	I1210 00:05:18.208119   83859 system_pods.go:61] "etcd-no-preload-048296" [4447a7d6-df4b-4760-9352-4df0310017ee] Running
	I1210 00:05:18.208125   83859 system_pods.go:61] "kube-apiserver-no-preload-048296" [55a06182-1520-4497-8358-639438eeb297] Running
	I1210 00:05:18.208130   83859 system_pods.go:61] "kube-controller-manager-no-preload-048296" [f30bdb21-7742-46e7-90fe-403679f5e02a] Running
	I1210 00:05:18.208136   83859 system_pods.go:61] "kube-proxy-qklxb" [8bb029a1-abf9-4825-b9ec-0520a78cb3d8] Running
	I1210 00:05:18.208141   83859 system_pods.go:61] "kube-scheduler-no-preload-048296" [019d6d37-c586-4f12-8daf-b60752223cf1] Running
	I1210 00:05:18.208152   83859 system_pods.go:61] "metrics-server-6867b74b74-n2f8c" [8e9f56c9-fd67-4715-9148-1255be17f1fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:05:18.208163   83859 system_pods.go:61] "storage-provisioner" [55fe311f-4610-4805-9fb7-3f1cac7c96e6] Running
	I1210 00:05:18.208176   83859 system_pods.go:74] duration metric: took 6.661729ms to wait for pod list to return data ...
	I1210 00:05:18.208187   83859 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:05:18.375431   83859 default_sa.go:45] found service account: "default"
	I1210 00:05:18.375458   83859 default_sa.go:55] duration metric: took 167.260728ms for default service account to be created ...
	I1210 00:05:18.375467   83859 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:05:18.578148   83859 system_pods.go:86] 9 kube-system pods found
	I1210 00:05:18.578176   83859 system_pods.go:89] "coredns-7c65d6cfc9-56djc" [2e25bad9-b88a-4ac8-a180-968bf6b057a2] Running
	I1210 00:05:18.578183   83859 system_pods.go:89] "coredns-7c65d6cfc9-8rxx7" [3fdfd41b-8ef7-41af-a703-ebecfe9ad319] Running
	I1210 00:05:18.578187   83859 system_pods.go:89] "etcd-no-preload-048296" [4447a7d6-df4b-4760-9352-4df0310017ee] Running
	I1210 00:05:18.578191   83859 system_pods.go:89] "kube-apiserver-no-preload-048296" [55a06182-1520-4497-8358-639438eeb297] Running
	I1210 00:05:18.578195   83859 system_pods.go:89] "kube-controller-manager-no-preload-048296" [f30bdb21-7742-46e7-90fe-403679f5e02a] Running
	I1210 00:05:18.578198   83859 system_pods.go:89] "kube-proxy-qklxb" [8bb029a1-abf9-4825-b9ec-0520a78cb3d8] Running
	I1210 00:05:18.578203   83859 system_pods.go:89] "kube-scheduler-no-preload-048296" [019d6d37-c586-4f12-8daf-b60752223cf1] Running
	I1210 00:05:18.578209   83859 system_pods.go:89] "metrics-server-6867b74b74-n2f8c" [8e9f56c9-fd67-4715-9148-1255be17f1fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 00:05:18.578213   83859 system_pods.go:89] "storage-provisioner" [55fe311f-4610-4805-9fb7-3f1cac7c96e6] Running
	I1210 00:05:18.578223   83859 system_pods.go:126] duration metric: took 202.750724ms to wait for k8s-apps to be running ...
	I1210 00:05:18.578233   83859 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:05:18.578277   83859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:05:18.592635   83859 system_svc.go:56] duration metric: took 14.391582ms WaitForService to wait for kubelet
	I1210 00:05:18.592669   83859 kubeadm.go:582] duration metric: took 7.814589832s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:05:18.592690   83859 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:05:18.776611   83859 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:05:18.776632   83859 node_conditions.go:123] node cpu capacity is 2
	I1210 00:05:18.776642   83859 node_conditions.go:105] duration metric: took 183.94737ms to run NodePressure ...
	I1210 00:05:18.776654   83859 start.go:241] waiting for startup goroutines ...
	I1210 00:05:18.776660   83859 start.go:246] waiting for cluster config update ...
	I1210 00:05:18.776672   83859 start.go:255] writing updated cluster config ...
	I1210 00:05:18.776944   83859 ssh_runner.go:195] Run: rm -f paused
	I1210 00:05:18.826550   83859 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:05:18.828405   83859 out.go:177] * Done! kubectl is now configured to use "no-preload-048296" cluster and "default" namespace by default
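	The readiness gates logged above for "no-preload-048296" (node "Ready", system-critical pods "Ready", the apiserver /healthz probe, and the metrics-server addon verification) can be spot-checked by hand against the same cluster. The commands below are a minimal sketch using standard kubectl flags, assuming the "no-preload-048296" context that the final log line says kubectl was configured to use; they are not part of the captured test output.
	
		# node readiness, mirroring the node_ready.go wait
		kubectl --context no-preload-048296 get nodes
		# system-critical pod readiness, mirroring the pod_ready.go waits (kube-dns label shown as one example)
		kubectl --context no-preload-048296 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
		# apiserver health, equivalent to the https://192.168.61.182:8443/healthz check above
		kubectl --context no-preload-048296 get --raw /healthz
	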
	I1210 00:05:43.088214   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:05:43.088352   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:05:43.089912   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:05:43.089988   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:05:43.090054   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:05:43.090140   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:05:43.090225   84547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:05:43.090302   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:05:43.092050   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:05:43.092141   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:05:43.092210   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:05:43.092305   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:05:43.092392   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:05:43.092493   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:05:43.092569   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:05:43.092680   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:05:43.092761   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:05:43.092865   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:05:43.092975   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:05:43.093045   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:05:43.093143   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:05:43.093188   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:05:43.093236   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:05:43.093317   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:05:43.093402   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:05:43.093561   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:05:43.093728   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:05:43.093785   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:05:43.093855   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:05:43.095285   84547 out.go:235]   - Booting up control plane ...
	I1210 00:05:43.095396   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:05:43.095469   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:05:43.095525   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:05:43.095630   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:05:43.095804   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:05:43.095873   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:05:43.095960   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096155   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096237   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096414   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096486   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096679   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096741   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.096904   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.096969   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:05:43.097122   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:05:43.097129   84547 kubeadm.go:310] 
	I1210 00:05:43.097167   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:05:43.097202   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:05:43.097211   84547 kubeadm.go:310] 
	I1210 00:05:43.097251   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:05:43.097280   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:05:43.097373   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:05:43.097379   84547 kubeadm.go:310] 
	I1210 00:05:43.097465   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:05:43.097495   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:05:43.097522   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:05:43.097531   84547 kubeadm.go:310] 
	I1210 00:05:43.097619   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:05:43.097688   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:05:43.097694   84547 kubeadm.go:310] 
	I1210 00:05:43.097794   84547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:05:43.097875   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:05:43.097941   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:05:43.098038   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:05:43.098124   84547 kubeadm.go:310] 
	W1210 00:05:43.098179   84547 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 00:05:43.098224   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:05:48.532537   84547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.434287494s)
	I1210 00:05:48.532615   84547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:05:48.546394   84547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:05:48.555650   84547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:05:48.555669   84547 kubeadm.go:157] found existing configuration files:
	
	I1210 00:05:48.555721   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:05:48.565301   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:05:48.565368   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:05:48.575330   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:05:48.584238   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:05:48.584324   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:05:48.593395   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.602150   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:05:48.602209   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:05:48.611177   84547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:05:48.619837   84547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:05:48.619907   84547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:05:48.629126   84547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:05:48.827680   84547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:07:44.857816   84547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:07:44.857930   84547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:07:44.859490   84547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:07:44.859542   84547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:07:44.859627   84547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:07:44.859758   84547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:07:44.859894   84547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:07:44.859953   84547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:07:44.861538   84547 out.go:235]   - Generating certificates and keys ...
	I1210 00:07:44.861657   84547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:07:44.861736   84547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:07:44.861809   84547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:07:44.861861   84547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:07:44.861920   84547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:07:44.861969   84547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:07:44.862024   84547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:07:44.862077   84547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:07:44.862143   84547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:07:44.862212   84547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:07:44.862246   84547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:07:44.862293   84547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:07:44.862343   84547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:07:44.862408   84547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:07:44.862505   84547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:07:44.862615   84547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:07:44.862765   84547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:07:44.862897   84547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:07:44.862955   84547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:07:44.863059   84547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:07:44.865485   84547 out.go:235]   - Booting up control plane ...
	I1210 00:07:44.865595   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:07:44.865720   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:07:44.865815   84547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:07:44.865929   84547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:07:44.866136   84547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:07:44.866209   84547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:07:44.866332   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866517   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866586   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866752   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.866818   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.866976   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867042   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867210   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867323   84547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:07:44.867512   84547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:07:44.867522   84547 kubeadm.go:310] 
	I1210 00:07:44.867611   84547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:07:44.867671   84547 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:07:44.867691   84547 kubeadm.go:310] 
	I1210 00:07:44.867729   84547 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:07:44.867766   84547 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:07:44.867880   84547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:07:44.867895   84547 kubeadm.go:310] 
	I1210 00:07:44.868055   84547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:07:44.868105   84547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:07:44.868138   84547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:07:44.868146   84547 kubeadm.go:310] 
	I1210 00:07:44.868250   84547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:07:44.868354   84547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:07:44.868362   84547 kubeadm.go:310] 
	I1210 00:07:44.868464   84547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:07:44.868540   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:07:44.868606   84547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:07:44.868689   84547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:07:44.868741   84547 kubeadm.go:310] 
	I1210 00:07:44.868762   84547 kubeadm.go:394] duration metric: took 8m1.943355039s to StartCluster
	I1210 00:07:44.868799   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:07:44.868852   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:07:44.906641   84547 cri.go:89] found id: ""
	I1210 00:07:44.906667   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.906675   84547 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:07:44.906681   84547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:07:44.906734   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:07:44.942832   84547 cri.go:89] found id: ""
	I1210 00:07:44.942863   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.942872   84547 logs.go:284] No container was found matching "etcd"
	I1210 00:07:44.942881   84547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:07:44.942945   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:07:44.978010   84547 cri.go:89] found id: ""
	I1210 00:07:44.978034   84547 logs.go:282] 0 containers: []
	W1210 00:07:44.978042   84547 logs.go:284] No container was found matching "coredns"
	I1210 00:07:44.978047   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:07:44.978108   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:07:45.017066   84547 cri.go:89] found id: ""
	I1210 00:07:45.017089   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.017097   84547 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:07:45.017110   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:07:45.017172   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:07:45.049757   84547 cri.go:89] found id: ""
	I1210 00:07:45.049778   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.049786   84547 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:07:45.049791   84547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:07:45.049842   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:07:45.082754   84547 cri.go:89] found id: ""
	I1210 00:07:45.082789   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.082800   84547 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:07:45.082807   84547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:07:45.082933   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:07:45.117188   84547 cri.go:89] found id: ""
	I1210 00:07:45.117219   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.117227   84547 logs.go:284] No container was found matching "kindnet"
	I1210 00:07:45.117233   84547 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 00:07:45.117302   84547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 00:07:45.153747   84547 cri.go:89] found id: ""
	I1210 00:07:45.153776   84547 logs.go:282] 0 containers: []
	W1210 00:07:45.153785   84547 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 00:07:45.153795   84547 logs.go:123] Gathering logs for kubelet ...
	I1210 00:07:45.153810   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 00:07:45.207625   84547 logs.go:123] Gathering logs for dmesg ...
	I1210 00:07:45.207662   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:07:45.220929   84547 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:07:45.220957   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:07:45.291850   84547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:07:45.291883   84547 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:07:45.291899   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:07:45.397592   84547 logs.go:123] Gathering logs for container status ...
	I1210 00:07:45.397629   84547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 00:07:45.446755   84547 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 00:07:45.446816   84547 out.go:270] * 
	W1210 00:07:45.446898   84547 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.446921   84547 out.go:270] * 
	W1210 00:07:45.448145   84547 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 00:07:45.452118   84547 out.go:201] 
	W1210 00:07:45.453335   84547 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:07:45.453398   84547 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 00:07:45.453438   84547 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 00:07:45.455704   84547 out.go:201] 
	
	
	==> CRI-O <==
	Dec 10 00:18:58 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:58.984103715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789938984078449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=046efd20-e338-49c9-85cb-fc5d0d2fc8f7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:18:58 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:58.984682267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=178924f9-83ff-449b-b553-f42f55cc4a22 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:18:58 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:58.984753092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=178924f9-83ff-449b-b553-f42f55cc4a22 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:18:58 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:58.984789602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=178924f9-83ff-449b-b553-f42f55cc4a22 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.016251221Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ffbaf00-a76e-4e1f-bf8f-3a4c7cf39181 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.016332542Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ffbaf00-a76e-4e1f-bf8f-3a4c7cf39181 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.017741504Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c7031fe-0f24-4a1d-946e-885fe4f466df name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.018104382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789939018083515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c7031fe-0f24-4a1d-946e-885fe4f466df name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.018660691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9074091b-1676-4b43-970d-af671d7469b7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.018719150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9074091b-1676-4b43-970d-af671d7469b7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.018751900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9074091b-1676-4b43-970d-af671d7469b7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.050431877Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92ea749a-9d6a-4961-8f19-482ff701d84a name=/runtime.v1.RuntimeService/Version
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.050517306Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92ea749a-9d6a-4961-8f19-482ff701d84a name=/runtime.v1.RuntimeService/Version
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.051790682Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eed7b960-4a5c-4b24-b946-f55bf948f303 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.052199978Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789939052178185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eed7b960-4a5c-4b24-b946-f55bf948f303 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.052808776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bdc04eb2-4c41-4b75-b9ce-9b2cd62b47b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.052893897Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bdc04eb2-4c41-4b75-b9ce-9b2cd62b47b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.052946508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bdc04eb2-4c41-4b75-b9ce-9b2cd62b47b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.082732003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06e5224b-8234-42a3-91e4-f86272d8e082 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.082828543Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06e5224b-8234-42a3-91e4-f86272d8e082 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.083880370Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26d56cf2-4280-4210-9e7a-55d0bbfa6c6f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.084282856Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789939084258738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26d56cf2-4280-4210-9e7a-55d0bbfa6c6f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.084823103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=364b0983-f1e6-422d-94fe-3070e0f3685a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.084893736Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=364b0983-f1e6-422d-94fe-3070e0f3685a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:18:59 old-k8s-version-720064 crio[624]: time="2024-12-10 00:18:59.084942003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=364b0983-f1e6-422d-94fe-3070e0f3685a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 23:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049452] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044986] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.967295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.057304] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.622707] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.503870] systemd-fstab-generator[550]: Ignoring "noauto" option for root device
	[  +0.058578] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077964] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.212967] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.149461] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.273531] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +6.290551] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.069158] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.965394] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[ +12.784108] kauditd_printk_skb: 46 callbacks suppressed
	[Dec10 00:03] systemd-fstab-generator[5026]: Ignoring "noauto" option for root device
	[Dec10 00:05] systemd-fstab-generator[5316]: Ignoring "noauto" option for root device
	[  +0.059046] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:18:59 up 19 min,  0 users,  load average: 0.18, 0.07, 0.02
	Linux old-k8s-version-720064 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc0009b4c60)
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]: goroutine 163 [select]:
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c8def0, 0x4f0ac20, 0xc0001013b0, 0x1, 0xc0001020c0)
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0006dac40, 0xc0001020c0)
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009ce120, 0xc0009cc260)
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 10 00:18:57 old-k8s-version-720064 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 10 00:18:57 old-k8s-version-720064 kubelet[6795]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 10 00:18:57 old-k8s-version-720064 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 00:18:58 old-k8s-version-720064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 137.
	Dec 10 00:18:58 old-k8s-version-720064 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 10 00:18:58 old-k8s-version-720064 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 10 00:18:58 old-k8s-version-720064 kubelet[6821]: I1210 00:18:58.724938    6821 server.go:416] Version: v1.20.0
	Dec 10 00:18:58 old-k8s-version-720064 kubelet[6821]: I1210 00:18:58.725351    6821 server.go:837] Client rotation is on, will bootstrap in background
	Dec 10 00:18:58 old-k8s-version-720064 kubelet[6821]: I1210 00:18:58.727400    6821 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 10 00:18:58 old-k8s-version-720064 kubelet[6821]: I1210 00:18:58.728772    6821 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Dec 10 00:18:58 old-k8s-version-720064 kubelet[6821]: W1210 00:18:58.728929    6821 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-720064 -n old-k8s-version-720064
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-720064 -n old-k8s-version-720064: exit status 2 (238.701661ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-720064" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (128.16s)
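
The failure above traces back to the kubelet on the old-k8s-version-720064 node never becoming healthy, and minikube's own suggestion in the log is to inspect the kubelet unit and retry with an explicit cgroup driver. A minimal troubleshooting sketch, using only commands already quoted in the output above; the `minikube ssh -p old-k8s-version-720064 --` wrapper is an assumption for reaching the node:

	minikube ssh -p old-k8s-version-720064 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-720064 -- sudo journalctl -xeu kubelet
	minikube ssh -p old-k8s-version-720064 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	minikube start -p old-k8s-version-720064 --extra-config=kubelet.cgroup-driver=systemd

The last command mirrors the suggestion printed at W1210 00:07:45; whether it resolves the "Cannot detect current cgroup on cgroup v2" warning seen in the kubelet log is not verified here.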

                                                
                                    

Test pass (255/321)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 13.7
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 6.83
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.06
18 TestDownloadOnly/v1.31.2/DeleteAll 0.14
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
22 TestOffline 54.8
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 188.01
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 9.49
35 TestAddons/parallel/Registry 17.2
37 TestAddons/parallel/InspektorGadget 10.86
40 TestAddons/parallel/CSI 51.15
41 TestAddons/parallel/Headlamp 20.7
42 TestAddons/parallel/CloudSpanner 6.61
43 TestAddons/parallel/LocalPath 13.09
44 TestAddons/parallel/NvidiaDevicePlugin 7.09
45 TestAddons/parallel/Yakd 12.07
48 TestCertOptions 80.71
49 TestCertExpiration 291.73
51 TestForceSystemdFlag 46.68
52 TestForceSystemdEnv 67.77
54 TestKVMDriverInstallOrUpdate 3.99
58 TestErrorSpam/setup 40.19
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.73
61 TestErrorSpam/pause 1.51
62 TestErrorSpam/unpause 1.71
63 TestErrorSpam/stop 4.76
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 88.48
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 37.8
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.56
75 TestFunctional/serial/CacheCmd/cache/add_local 1.96
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 32.33
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.35
86 TestFunctional/serial/LogsFileCmd 1.36
87 TestFunctional/serial/InvalidService 4.48
89 TestFunctional/parallel/ConfigCmd 0.36
90 TestFunctional/parallel/DashboardCmd 27.4
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 1.02
97 TestFunctional/parallel/ServiceCmdConnect 8.66
98 TestFunctional/parallel/AddonsCmd 0.12
99 TestFunctional/parallel/PersistentVolumeClaim 48.32
101 TestFunctional/parallel/SSHCmd 0.39
102 TestFunctional/parallel/CpCmd 1.3
103 TestFunctional/parallel/MySQL 23.83
104 TestFunctional/parallel/FileSync 0.2
105 TestFunctional/parallel/CertSync 1.27
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
113 TestFunctional/parallel/License 0.29
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.61
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
119 TestFunctional/parallel/ImageCommands/ImageListYaml 2.22
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.92
121 TestFunctional/parallel/ImageCommands/Setup 1.54
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
125 TestFunctional/parallel/ServiceCmd/DeployApp 13.17
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.46
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.38
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.21
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.64
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.99
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.59
138 TestFunctional/parallel/ServiceCmd/List 0.51
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.61
141 TestFunctional/parallel/ServiceCmd/Format 0.33
142 TestFunctional/parallel/ServiceCmd/URL 0.31
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
150 TestFunctional/parallel/ProfileCmd/profile_list 0.41
151 TestFunctional/parallel/MountCmd/any-port 7.73
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
153 TestFunctional/parallel/MountCmd/specific-port 1.83
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.4
155 TestFunctional/delete_echo-server_images 0.03
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 194.81
162 TestMultiControlPlane/serial/DeployApp 5.87
163 TestMultiControlPlane/serial/PingHostFromPods 1.15
164 TestMultiControlPlane/serial/AddWorkerNode 57.99
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
167 TestMultiControlPlane/serial/CopyFile 12.84
173 TestMultiControlPlane/serial/DeleteSecondaryNode 16.5
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.6
176 TestMultiControlPlane/serial/RestartCluster 362.02
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
178 TestMultiControlPlane/serial/AddSecondaryNode 77.91
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
183 TestJSONOutput/start/Command 78.36
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.7
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.58
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.34
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 87.42
215 TestMountStart/serial/StartWithMountFirst 28.88
216 TestMountStart/serial/VerifyMountFirst 0.37
217 TestMountStart/serial/StartWithMountSecond 26.28
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.88
220 TestMountStart/serial/VerifyMountPostDelete 0.38
221 TestMountStart/serial/Stop 1.29
222 TestMountStart/serial/RestartStopped 23.52
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 106.77
227 TestMultiNode/serial/DeployApp2Nodes 4.93
228 TestMultiNode/serial/PingHostFrom2Pods 0.78
229 TestMultiNode/serial/AddNode 52.01
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.58
232 TestMultiNode/serial/CopyFile 7.14
233 TestMultiNode/serial/StopNode 2.26
234 TestMultiNode/serial/StartAfterStop 38.48
236 TestMultiNode/serial/DeleteNode 1.94
238 TestMultiNode/serial/RestartMultiNode 206.37
239 TestMultiNode/serial/ValidateNameConflict 45.11
246 TestScheduledStopUnix 114.23
250 TestRunningBinaryUpgrade 199.39
254 TestStoppedBinaryUpgrade/Setup 0.74
258 TestStoppedBinaryUpgrade/Upgrade 167.54
263 TestNetworkPlugins/group/false 3.3
275 TestPause/serial/Start 96.82
276 TestPause/serial/SecondStartNoReconfiguration 41.17
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.87
279 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
280 TestNoKubernetes/serial/StartWithK8s 42.54
281 TestPause/serial/Pause 0.73
282 TestPause/serial/VerifyStatus 0.25
283 TestPause/serial/Unpause 0.66
284 TestPause/serial/PauseAgain 1
285 TestPause/serial/DeletePaused 1
286 TestPause/serial/VerifyDeletedResources 11.64
287 TestNoKubernetes/serial/StartWithStopK8s 60.69
288 TestNoKubernetes/serial/Start 47.64
289 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
290 TestNoKubernetes/serial/ProfileList 1.02
291 TestNoKubernetes/serial/Stop 1.29
292 TestNoKubernetes/serial/StartNoArgs 59.74
293 TestNetworkPlugins/group/auto/Start 74.25
294 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
295 TestNetworkPlugins/group/kindnet/Start 96.51
296 TestNetworkPlugins/group/auto/KubeletFlags 0.2
297 TestNetworkPlugins/group/auto/NetCatPod 10.21
298 TestNetworkPlugins/group/auto/DNS 26.46
299 TestNetworkPlugins/group/auto/Localhost 0.11
300 TestNetworkPlugins/group/auto/HairPin 0.12
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.21
304 TestNetworkPlugins/group/calico/Start 80.48
305 TestNetworkPlugins/group/kindnet/DNS 0.15
306 TestNetworkPlugins/group/kindnet/Localhost 0.13
307 TestNetworkPlugins/group/kindnet/HairPin 0.12
308 TestNetworkPlugins/group/custom-flannel/Start 81.61
309 TestNetworkPlugins/group/enable-default-cni/Start 79.88
310 TestNetworkPlugins/group/flannel/Start 101.55
311 TestNetworkPlugins/group/calico/ControllerPod 6.01
312 TestNetworkPlugins/group/calico/KubeletFlags 0.25
313 TestNetworkPlugins/group/calico/NetCatPod 13.31
314 TestNetworkPlugins/group/calico/DNS 0.17
315 TestNetworkPlugins/group/calico/Localhost 0.14
316 TestNetworkPlugins/group/calico/HairPin 0.12
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.27
319 TestNetworkPlugins/group/bridge/Start 93
320 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
321 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.23
322 TestNetworkPlugins/group/custom-flannel/DNS 0.17
323 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
324 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
331 TestStartStop/group/no-preload/serial/FirstStart 128.86
332 TestNetworkPlugins/group/flannel/ControllerPod 6.01
333 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
334 TestNetworkPlugins/group/flannel/NetCatPod 10.23
335 TestNetworkPlugins/group/flannel/DNS 0.16
336 TestNetworkPlugins/group/flannel/Localhost 0.12
337 TestNetworkPlugins/group/flannel/HairPin 0.15
339 TestStartStop/group/embed-certs/serial/FirstStart 85.3
340 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
341 TestNetworkPlugins/group/bridge/NetCatPod 10.24
342 TestNetworkPlugins/group/bridge/DNS 0.15
343 TestNetworkPlugins/group/bridge/Localhost 0.14
344 TestNetworkPlugins/group/bridge/HairPin 0.13
346 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 92.18
347 TestStartStop/group/no-preload/serial/DeployApp 9.28
348 TestStartStop/group/embed-certs/serial/DeployApp 8.3
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
351 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
353 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.25
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
360 TestStartStop/group/no-preload/serial/SecondStart 682.2
361 TestStartStop/group/embed-certs/serial/SecondStart 576.74
363 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 583.31
364 TestStartStop/group/old-k8s-version/serial/Stop 5.48
365 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
376 TestStartStop/group/newest-cni/serial/FirstStart 48.07
377 TestStartStop/group/newest-cni/serial/DeployApp 0
378 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
379 TestStartStop/group/newest-cni/serial/Stop 11.34
380 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
381 TestStartStop/group/newest-cni/serial/SecondStart 35.9
382 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
385 TestStartStop/group/newest-cni/serial/Pause 2.55
x
+
TestDownloadOnly/v1.20.0/json-events (13.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-091652 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-091652 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.694572147s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.70s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1209 22:32:09.301872   26253 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1209 22:32:09.301959   26253 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
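
The preload-exists check amounts to confirming that the cached tarball is present on disk. A quick manual equivalent, assuming the jenkins cache layout shown in the preload.go lines above:

	ls -lh /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4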

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-091652
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-091652: exit status 85 (62.734368ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-091652 | jenkins | v1.34.0 | 09 Dec 24 22:31 UTC |          |
	|         | -p download-only-091652        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 22:31:55
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 22:31:55.646110   26265 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:31:55.646231   26265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:31:55.646240   26265 out.go:358] Setting ErrFile to fd 2...
	I1209 22:31:55.646244   26265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:31:55.646440   26265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	W1209 22:31:55.646564   26265 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19888-18950/.minikube/config/config.json: open /home/jenkins/minikube-integration/19888-18950/.minikube/config/config.json: no such file or directory
	I1209 22:31:55.647132   26265 out.go:352] Setting JSON to true
	I1209 22:31:55.648035   26265 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4467,"bootTime":1733779049,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 22:31:55.648155   26265 start.go:139] virtualization: kvm guest
	I1209 22:31:55.650400   26265 out.go:97] [download-only-091652] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1209 22:31:55.650515   26265 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 22:31:55.650584   26265 notify.go:220] Checking for updates...
	I1209 22:31:55.651859   26265 out.go:169] MINIKUBE_LOCATION=19888
	I1209 22:31:55.653270   26265 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:31:55.654619   26265 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:31:55.656022   26265 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:31:55.657260   26265 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 22:31:55.659413   26265 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 22:31:55.659666   26265 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:31:55.770250   26265 out.go:97] Using the kvm2 driver based on user configuration
	I1209 22:31:55.770278   26265 start.go:297] selected driver: kvm2
	I1209 22:31:55.770284   26265 start.go:901] validating driver "kvm2" against <nil>
	I1209 22:31:55.770620   26265 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:31:55.770729   26265 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 22:31:55.786048   26265 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 22:31:55.786097   26265 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 22:31:55.786649   26265 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1209 22:31:55.786799   26265 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 22:31:55.786823   26265 cni.go:84] Creating CNI manager for ""
	I1209 22:31:55.786870   26265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 22:31:55.786879   26265 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 22:31:55.786920   26265 start.go:340] cluster config:
	{Name:download-only-091652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-091652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:31:55.787083   26265 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:31:55.788913   26265 out.go:97] Downloading VM boot image ...
	I1209 22:31:55.788945   26265 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19888-18950/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 22:32:01.622451   26265 out.go:97] Starting "download-only-091652" primary control-plane node in "download-only-091652" cluster
	I1209 22:32:01.622478   26265 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 22:32:01.648835   26265 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 22:32:01.648872   26265 cache.go:56] Caching tarball of preloaded images
	I1209 22:32:01.649034   26265 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 22:32:01.651134   26265 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1209 22:32:01.651169   26265 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1209 22:32:01.677827   26265 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-091652 host does not exist
	  To start a cluster, run: "minikube start -p download-only-091652"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-091652
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (6.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-578923 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-578923 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.826755772s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (6.83s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1209 22:32:16.473684   26253 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1209 22:32:16.473730   26253 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-578923
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-578923: exit status 85 (60.258456ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-091652 | jenkins | v1.34.0 | 09 Dec 24 22:31 UTC |                     |
	|         | -p download-only-091652        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC | 09 Dec 24 22:32 UTC |
	| delete  | -p download-only-091652        | download-only-091652 | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC | 09 Dec 24 22:32 UTC |
	| start   | -o=json --download-only        | download-only-578923 | jenkins | v1.34.0 | 09 Dec 24 22:32 UTC |                     |
	|         | -p download-only-578923        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 22:32:09
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 22:32:09.688329   26490 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:32:09.688456   26490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:32:09.688465   26490 out.go:358] Setting ErrFile to fd 2...
	I1209 22:32:09.688471   26490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:32:09.688665   26490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 22:32:09.689210   26490 out.go:352] Setting JSON to true
	I1209 22:32:09.690089   26490 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4481,"bootTime":1733779049,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 22:32:09.690169   26490 start.go:139] virtualization: kvm guest
	I1209 22:32:09.692482   26490 out.go:97] [download-only-578923] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 22:32:09.692731   26490 notify.go:220] Checking for updates...
	I1209 22:32:09.694224   26490 out.go:169] MINIKUBE_LOCATION=19888
	I1209 22:32:09.695856   26490 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:32:09.697338   26490 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:32:09.698776   26490 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:32:09.700139   26490 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 22:32:09.702780   26490 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 22:32:09.703005   26490 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:32:09.736183   26490 out.go:97] Using the kvm2 driver based on user configuration
	I1209 22:32:09.736222   26490 start.go:297] selected driver: kvm2
	I1209 22:32:09.736231   26490 start.go:901] validating driver "kvm2" against <nil>
	I1209 22:32:09.736623   26490 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:32:09.736718   26490 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19888-18950/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 22:32:09.753670   26490 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 22:32:09.753951   26490 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 22:32:09.754518   26490 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1209 22:32:09.754655   26490 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 22:32:09.754682   26490 cni.go:84] Creating CNI manager for ""
	I1209 22:32:09.754728   26490 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 22:32:09.754738   26490 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 22:32:09.754789   26490 start.go:340] cluster config:
	{Name:download-only-578923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-578923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:32:09.754885   26490 iso.go:125] acquiring lock: {Name:mk12aed5d94c30496ac8e1e058352ee84717d8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 22:32:09.756596   26490 out.go:97] Starting "download-only-578923" primary control-plane node in "download-only-578923" cluster
	I1209 22:32:09.756616   26490 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:32:09.822662   26490 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 22:32:09.822697   26490 cache.go:56] Caching tarball of preloaded images
	I1209 22:32:09.822835   26490 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 22:32:09.824739   26490 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1209 22:32:09.824757   26490 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1209 22:32:09.852677   26490 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/19888-18950/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-578923 host does not exist
	  To start a cluster, run: "minikube start -p download-only-578923"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-578923
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1209 22:32:17.057088   26253 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-501847 --alsologtostderr --binary-mirror http://127.0.0.1:39247 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-501847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-501847
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestOffline (54.8s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-988358 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-988358 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (53.728663589s)
helpers_test.go:175: Cleaning up "offline-crio-988358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-988358
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-988358: (1.066157823s)
--- PASS: TestOffline (54.80s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-495659
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-495659: exit status 85 (50.626316ms)

                                                
                                                
-- stdout --
	* Profile "addons-495659" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-495659"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-495659
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-495659: exit status 85 (52.840334ms)

                                                
                                                
-- stdout --
	* Profile "addons-495659" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-495659"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (188.01s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-495659 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-495659 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m8.005836943s)
--- PASS: TestAddons/Setup (188.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-495659 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-495659 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-495659 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-495659 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [305d5fb6-4c01-480c-9f96-855b1c53733a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [305d5fb6-4c01-480c-9f96-855b1c53733a] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004613398s
addons_test.go:633: (dbg) Run:  kubectl --context addons-495659 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-495659 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-495659 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

                                                
                                    
TestAddons/parallel/Registry (17.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.019092ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-m98x5" [ecb1f96a-9905-45be-b670-6791c5067c07] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003158119s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xqgz7" [8103c584-faf4-4900-8fda-b5367b887c19] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.077059082s
addons_test.go:331: (dbg) Run:  kubectl --context addons-495659 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-495659 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-495659 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.305951646s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 ip
2024/12/09 22:36:00 [DEBUG] GET http://192.168.39.123:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.20s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.86s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jz9pw" [d79772f2-7ffc-453d-85a2-356bda885e32] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0645765s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-495659 addons disable inspektor-gadget --alsologtostderr -v=1: (5.792379765s)
--- PASS: TestAddons/parallel/InspektorGadget (10.86s)

                                                
                                    
TestAddons/parallel/CSI (51.15s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1209 22:36:02.611952   26253 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 22:36:02.617965   26253 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 22:36:02.617990   26253 kapi.go:107] duration metric: took 6.052865ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 6.060221ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-495659 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-495659 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [dd21afa2-f877-4d17-b8b6-74f4d0ba04ab] Pending
helpers_test.go:344: "task-pv-pod" [dd21afa2-f877-4d17-b8b6-74f4d0ba04ab] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [dd21afa2-f877-4d17-b8b6-74f4d0ba04ab] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003562814s
addons_test.go:511: (dbg) Run:  kubectl --context addons-495659 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-495659 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-495659 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-495659 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-495659 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-495659 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-495659 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b451504a-35d1-4bd8-bdda-759aeb5a6b39] Pending
helpers_test.go:344: "task-pv-pod-restore" [b451504a-35d1-4bd8-bdda-759aeb5a6b39] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b451504a-35d1-4bd8-bdda-759aeb5a6b39] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004538008s
addons_test.go:553: (dbg) Run:  kubectl --context addons-495659 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-495659 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-495659 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-495659 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.766071693s)
--- PASS: TestAddons/parallel/CSI (51.15s)

                                                
                                    
TestAddons/parallel/Headlamp (20.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-495659 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-2bt4l" [07df8260-4fee-4b46-940a-ab35df2a9ca3] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-2bt4l" [07df8260-4fee-4b46-940a-ab35df2a9ca3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-2bt4l" [07df8260-4fee-4b46-940a-ab35df2a9ca3] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.006284213s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-495659 addons disable headlamp --alsologtostderr -v=1: (5.826916905s)
--- PASS: TestAddons/parallel/Headlamp (20.70s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-6rtfn" [7cf5ab15-c579-4211-baf4-0310f177db15] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004138173s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.61s)

                                                
                                    
TestAddons/parallel/LocalPath (13.09s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-495659 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-495659 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-495659 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [99f253e8-6f92-40f8-8b4d-a9bc4f2e6477] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [99f253e8-6f92-40f8-8b4d-a9bc4f2e6477] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [99f253e8-6f92-40f8-8b4d-a9bc4f2e6477] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004048424s
addons_test.go:906: (dbg) Run:  kubectl --context addons-495659 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 ssh "cat /opt/local-path-provisioner/pvc-d50f59cb-64dd-4a2e-b94c-429fc96e21da_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-495659 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-495659 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.09s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (7.09s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wbphv" [373a99a7-1c49-427a-931d-f6d3bcb7cc29] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004902014s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-495659 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.080508949s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.09s)

                                                
                                    
TestAddons/parallel/Yakd (12.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-c2vw2" [ba2a4b8c-0700-4d46-95d2-105934b1147e] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004709102s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-495659 addons disable yakd --alsologtostderr -v=1: (6.063415922s)
--- PASS: TestAddons/parallel/Yakd (12.07s)

                                                
                                    
TestCertOptions (80.71s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-604351 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-604351 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m19.46139294s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-604351 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-604351 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-604351 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-604351" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-604351
--- PASS: TestCertOptions (80.71s)

                                                
                                    
TestCertExpiration (291.73s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-801840 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E1209 23:42:55.594442   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-801840 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (57.68888457s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-801840 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-801840 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (53.021559247s)
helpers_test.go:175: Cleaning up "cert-expiration-801840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-801840
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-801840: (1.022709389s)
--- PASS: TestCertExpiration (291.73s)

                                                
                                    
TestForceSystemdFlag (46.68s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-386973 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-386973 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.687652388s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-386973 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-386973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-386973
--- PASS: TestForceSystemdFlag (46.68s)

                                                
                                    
TestForceSystemdEnv (67.77s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-643543 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-643543 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.754602684s)
helpers_test.go:175: Cleaning up "force-systemd-env-643543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-643543
E1209 23:43:12.522702   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-643543: (1.015053774s)
--- PASS: TestForceSystemdEnv (67.77s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.99s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1209 23:42:01.642833   26253 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 23:42:01.642967   26253 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1209 23:42:01.674366   26253 install.go:62] docker-machine-driver-kvm2: exit status 1
W1209 23:42:01.674725   26253 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1209 23:42:01.674792   26253 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate332637024/001/docker-machine-driver-kvm2
I1209 23:42:01.937258   26253 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate332637024/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000014180 gz:0xc000014188 tar:0xc0004f1ff0 tar.bz2:0xc000014140 tar.gz:0xc000014150 tar.xz:0xc000014160 tar.zst:0xc000014170 tbz2:0xc000014140 tgz:0xc000014150 txz:0xc000014160 tzst:0xc000014170 xz:0xc000014190 zip:0xc0000141a0 zst:0xc000014198] Getters:map[file:0xc00218c6d0 http:0xc00099c5a0 https:0xc00099c5f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1209 23:42:01.937323   26253 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate332637024/001/docker-machine-driver-kvm2
I1209 23:42:03.869913   26253 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 23:42:03.869995   26253 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1209 23:42:03.905740   26253 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1209 23:42:03.905782   26253 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1209 23:42:03.905848   26253 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1209 23:42:03.905887   26253 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate332637024/002/docker-machine-driver-kvm2
I1209 23:42:03.960442   26253 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate332637024/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000014180 gz:0xc000014188 tar:0xc0004f1ff0 tar.bz2:0xc000014140 tar.gz:0xc000014150 tar.xz:0xc000014160 tar.zst:0xc000014170 tbz2:0xc000014140 tgz:0xc000014150 txz:0xc000014160 tzst:0xc000014170 xz:0xc000014190 zip:0xc0000141a0 zst:0xc000014198] Getters:map[file:0xc00218d980 http:0xc00076b400 https:0xc00076b450] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1209 23:42:03.960509   26253 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate332637024/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.99s)

                                                
                                    
TestErrorSpam/setup (40.19s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-916342 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-916342 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-916342 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-916342 --driver=kvm2  --container-runtime=crio: (40.185553575s)
--- PASS: TestErrorSpam/setup (40.19s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
TestErrorSpam/pause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
TestErrorSpam/unpause (1.71s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 unpause
--- PASS: TestErrorSpam/unpause (1.71s)

                                                
                                    
TestErrorSpam/stop (4.76s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 stop: (1.648944585s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 stop: (1.658089158s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-916342 --log_dir /tmp/nospam-916342 stop: (1.451788929s)
--- PASS: TestErrorSpam/stop (4.76s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19888-18950/.minikube/files/etc/test/nested/copy/26253/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (88.48s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-967202 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1209 22:45:26.333029   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:45:26.339458   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:45:26.350874   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:45:26.372308   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:45:26.413773   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:45:26.495249   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:45:26.656890   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:45:26.978663   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:45:27.620738   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:45:28.902323   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:45:31.465250   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:45:36.587078   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:45:46.828486   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:46:07.310370   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-967202 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m28.48352989s)
--- PASS: TestFunctional/serial/StartWithProxy (88.48s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.8s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1209 22:46:46.545323   26253 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-967202 --alsologtostderr -v=8
E1209 22:46:48.272070   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-967202 --alsologtostderr -v=8: (37.796418967s)
functional_test.go:663: soft start took 37.797127006s for "functional-967202" cluster.
I1209 22:47:24.342068   26253 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (37.80s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-967202 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-967202 cache add registry.k8s.io/pause:3.1: (1.207454506s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-967202 cache add registry.k8s.io/pause:3.3: (1.18407754s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-967202 cache add registry.k8s.io/pause:latest: (1.17230867s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-967202 /tmp/TestFunctionalserialCacheCmdcacheadd_local593498724/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 cache add minikube-local-cache-test:functional-967202
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-967202 cache add minikube-local-cache-test:functional-967202: (1.645834727s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 cache delete minikube-local-cache-test:functional-967202
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-967202
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.96s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-967202 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (207.892401ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-967202 cache reload: (1.035519029s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 kubectl -- --context functional-967202 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-967202 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.33s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-967202 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-967202 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.325714253s)
functional_test.go:761: restart took 32.325839757s for "functional-967202" cluster.
I1209 22:48:04.614521   26253 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (32.33s)
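
Note: the restart above re-provisions the existing functional-967202 cluster with an extra apiserver flag and waits for every component, which is why it takes ~32 s. The invocation, as exercised by the test (sketch with the plain minikube binary):

  minikube start -p functional-967202 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all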

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-967202 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-967202 logs: (1.353732744s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 logs --file /tmp/TestFunctionalserialLogsFileCmd2235735003/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-967202 logs --file /tmp/TestFunctionalserialLogsFileCmd2235735003/001/logs.txt: (1.359351368s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.48s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-967202 apply -f testdata/invalidsvc.yaml
E1209 22:48:10.195177   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-967202
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-967202: exit status 115 (271.000303ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.72:30419 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-967202 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-967202 delete -f testdata/invalidsvc.yaml: (1.010938119s)
--- PASS: TestFunctional/serial/InvalidService (4.48s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-967202 config get cpus: exit status 14 (54.09061ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-967202 config get cpus: exit status 14 (65.170219ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
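
The exit status 14 results above are expected: `minikube config get` returns 14 when the requested key is not set. A minimal sketch of the same set/get/unset cycle, with the profile name from this run:

  minikube -p functional-967202 config set cpus 2      # store the value in the profile config
  minikube -p functional-967202 config get cpus        # prints 2, exit status 0
  minikube -p functional-967202 config unset cpus      # drop the key again
  minikube -p functional-967202 config get cpus        # "specified key could not be found in config", exit status 14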

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (27.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-967202 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-967202 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 35779: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (27.40s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-967202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-967202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.010019ms)

                                                
                                                
-- stdout --
	* [functional-967202] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 22:48:29.816056   35546 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:48:29.816298   35546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:48:29.816307   35546 out.go:358] Setting ErrFile to fd 2...
	I1209 22:48:29.816311   35546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:48:29.816512   35546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 22:48:29.817040   35546 out.go:352] Setting JSON to false
	I1209 22:48:29.818015   35546 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5461,"bootTime":1733779049,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 22:48:29.818071   35546 start.go:139] virtualization: kvm guest
	I1209 22:48:29.820349   35546 out.go:177] * [functional-967202] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 22:48:29.821636   35546 notify.go:220] Checking for updates...
	I1209 22:48:29.821673   35546 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 22:48:29.822947   35546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:48:29.824099   35546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:48:29.825393   35546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:48:29.826639   35546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 22:48:29.827802   35546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 22:48:29.829230   35546 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:48:29.829596   35546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:48:29.829647   35546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:48:29.847412   35546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I1209 22:48:29.847989   35546 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:48:29.848525   35546 main.go:141] libmachine: Using API Version  1
	I1209 22:48:29.848549   35546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:48:29.848881   35546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:48:29.849049   35546 main.go:141] libmachine: (functional-967202) Calling .DriverName
	I1209 22:48:29.849281   35546 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:48:29.849692   35546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:48:29.849742   35546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:48:29.866199   35546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42977
	I1209 22:48:29.866720   35546 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:48:29.867315   35546 main.go:141] libmachine: Using API Version  1
	I1209 22:48:29.867335   35546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:48:29.867714   35546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:48:29.867919   35546 main.go:141] libmachine: (functional-967202) Calling .DriverName
	I1209 22:48:29.903877   35546 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 22:48:29.905125   35546 start.go:297] selected driver: kvm2
	I1209 22:48:29.905139   35546 start.go:901] validating driver "kvm2" against &{Name:functional-967202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-967202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:48:29.905229   35546 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 22:48:29.907430   35546 out.go:201] 
	W1209 22:48:29.908516   35546 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 22:48:29.909642   35546 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-967202 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
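
Exit status 23 corresponds to the RSRC_INSUFFICIENT_REQ_MEMORY check visible in the stderr above: --dry-run validates the requested settings against the existing profile without touching the VM. A sketch of the two calls the test makes:

  minikube start -p functional-967202 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio   # rejected: 250MiB is below the 1800MB minimum, exit status 23
  minikube start -p functional-967202 --dry-run --driver=kvm2 --container-runtime=crio                  # validates cleanly without starting anything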

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-967202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-967202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (150.35614ms)

                                                
                                                
-- stdout --
	* [functional-967202] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 22:48:30.107543   35637 out.go:345] Setting OutFile to fd 1 ...
	I1209 22:48:30.107674   35637 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:48:30.107684   35637 out.go:358] Setting ErrFile to fd 2...
	I1209 22:48:30.107688   35637 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 22:48:30.107949   35637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 22:48:30.108450   35637 out.go:352] Setting JSON to false
	I1209 22:48:30.109460   35637 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5461,"bootTime":1733779049,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 22:48:30.109547   35637 start.go:139] virtualization: kvm guest
	I1209 22:48:30.111831   35637 out.go:177] * [functional-967202] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1209 22:48:30.113616   35637 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 22:48:30.113635   35637 notify.go:220] Checking for updates...
	I1209 22:48:30.116066   35637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 22:48:30.117381   35637 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 22:48:30.118729   35637 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 22:48:30.120048   35637 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 22:48:30.121304   35637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 22:48:30.122954   35637 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 22:48:30.123591   35637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:48:30.123680   35637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:48:30.139180   35637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37481
	I1209 22:48:30.139681   35637 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:48:30.140317   35637 main.go:141] libmachine: Using API Version  1
	I1209 22:48:30.140342   35637 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:48:30.140778   35637 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:48:30.140972   35637 main.go:141] libmachine: (functional-967202) Calling .DriverName
	I1209 22:48:30.141262   35637 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 22:48:30.141694   35637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 22:48:30.141748   35637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 22:48:30.155914   35637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34369
	I1209 22:48:30.156306   35637 main.go:141] libmachine: () Calling .GetVersion
	I1209 22:48:30.156957   35637 main.go:141] libmachine: Using API Version  1
	I1209 22:48:30.156980   35637 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 22:48:30.157345   35637 main.go:141] libmachine: () Calling .GetMachineName
	I1209 22:48:30.157605   35637 main.go:141] libmachine: (functional-967202) Calling .DriverName
	I1209 22:48:30.195084   35637 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1209 22:48:30.196248   35637 start.go:297] selected driver: kvm2
	I1209 22:48:30.196262   35637 start.go:901] validating driver "kvm2" against &{Name:functional-967202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-967202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.72 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 22:48:30.196375   35637 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 22:48:30.198425   35637 out.go:201] 
	W1209 22:48:30.199751   35637 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 22:48:30.200935   35637 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
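
The three invocations above cover the default, templated, and JSON status outputs. A sketch (the Go-template fields are the ones used by the test; its format string labels the kubelet field "kublet", which only affects the printed label):

  minikube -p functional-967202 status                                               # human-readable summary
  minikube -p functional-967202 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'      # custom Go template over the status fields
  minikube -p functional-967202 status -o json                                       # machine-readable output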

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-967202 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-967202 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-mfxj4" [319dfc4c-1a82-4810-b996-dd19535bba52] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-mfxj4" [319dfc4c-1a82-4810-b996-dd19535bba52] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0362851s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.50.72:30893
functional_test.go:1675: http://192.168.50.72:30893: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-mfxj4

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.72:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.72:30893
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.66s)
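
The echoserver body above is fetched from a NodePort service through the URL minikube resolves. A sketch of the same flow (names and image from this run; curl stands in for the test's HTTP GET):

  kubectl --context functional-967202 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-967202 expose deployment hello-node-connect --type=NodePort --port=8080
  minikube -p functional-967202 service hello-node-connect --url                  # e.g. http://192.168.50.72:30893
  curl "$(minikube -p functional-967202 service hello-node-connect --url)"        # returns the request dump shown above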

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (48.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [67dc94ff-7c5b-4e57-b3c5-1aa0d2a48018] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003553039s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-967202 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-967202 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-967202 get pvc myclaim -o=json
I1209 22:48:19.799773   26253 retry.go:31] will retry after 2.677598439s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:746ef93d-4a0f-4c6c-8240-29a047d92bcc ResourceVersion:746 Generation:0 CreationTimestamp:2024-12-09 22:48:19 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-746ef93d-4a0f-4c6c-8240-29a047d92bcc StorageClassName:0xc001ba0300 VolumeMode:0xc001ba0310 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-967202 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-967202 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2daa4fce-1ac7-43cf-9b07-0f17433fac41] Pending
helpers_test.go:344: "sp-pod" [2daa4fce-1ac7-43cf-9b07-0f17433fac41] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2daa4fce-1ac7-43cf-9b07-0f17433fac41] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003647422s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-967202 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-967202 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-967202 delete -f testdata/storage-provisioner/pod.yaml: (4.303187334s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-967202 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8861652d-53eb-4426-8ed9-70b57661f55f] Pending
helpers_test.go:344: "sp-pod" [8861652d-53eb-4426-8ed9-70b57661f55f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8861652d-53eb-4426-8ed9-70b57661f55f] Running
2024/12/09 22:48:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.003589754s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-967202 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.32s)
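
The test checks that data written through the claim survives pod deletion: a 500Mi ReadWriteOnce PVC named myclaim is bound by the storage-provisioner addon, mounted at /tmp/mount, written to, and re-mounted by a fresh pod. A sketch of the check, assuming manifests equivalent to the test's pvc.yaml and pod.yaml:

  kubectl --context functional-967202 apply -f pvc.yaml             # 500Mi ReadWriteOnce claim "myclaim"
  kubectl --context functional-967202 apply -f pod.yaml             # pod "sp-pod" mounting the claim at /tmp/mount
  kubectl --context functional-967202 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-967202 delete -f pod.yaml
  kubectl --context functional-967202 apply -f pod.yaml             # new pod, same claim
  kubectl --context functional-967202 exec sp-pod -- ls /tmp/mount  # "foo" is still there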

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh -n functional-967202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 cp functional-967202:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1551125472/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh -n functional-967202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh -n functional-967202 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)
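
The cp subtests copy a file into the node, back out to the host, and into a directory that does not yet exist (which minikube cp creates). A sketch with a hypothetical local destination path:

  minikube -p functional-967202 cp testdata/cp-test.txt /home/docker/cp-test.txt                     # host -> node
  minikube -p functional-967202 ssh -n functional-967202 "sudo cat /home/docker/cp-test.txt"
  minikube -p functional-967202 cp functional-967202:/home/docker/cp-test.txt ./cp-test-copy.txt     # node -> host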

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-967202 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-58d2l" [1db59836-371b-4e45-8197-036d066339c6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-58d2l" [1db59836-371b-4e45-8197-036d066339c6] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.008222785s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-967202 exec mysql-6cdb49bbb-58d2l -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-967202 exec mysql-6cdb49bbb-58d2l -- mysql -ppassword -e "show databases;": exit status 1 (127.416658ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 22:48:52.517945   26253 retry.go:31] will retry after 1.342882735s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-967202 exec mysql-6cdb49bbb-58d2l -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.83s)
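
The first exec above fails with ERROR 2002 because mysqld inside the pod is still initializing; the test simply retries until the query succeeds. A sketch of the check, using the pod name from this run:

  kubectl --context functional-967202 replace --force -f testdata/mysql.yaml
  kubectl --context functional-967202 get pods -l app=mysql                                                  # wait for Running
  kubectl --context functional-967202 exec mysql-6cdb49bbb-58d2l -- mysql -ppassword -e "show databases;"    # retry on ERROR 2002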

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/26253/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "sudo cat /etc/test/nested/copy/26253/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/26253.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "sudo cat /etc/ssl/certs/26253.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/26253.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "sudo cat /usr/share/ca-certificates/26253.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/262532.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "sudo cat /etc/ssl/certs/262532.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/262532.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "sudo cat /usr/share/ca-certificates/262532.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)
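
CertSync verifies that the host's test certificates are synced into the VM under both /etc/ssl/certs and /usr/share/ca-certificates, including the hash-named links. A sketch of one round of the check (file names taken from this run):

  minikube -p functional-967202 ssh "sudo cat /etc/ssl/certs/26253.pem"
  minikube -p functional-967202 ssh "sudo cat /usr/share/ca-certificates/26253.pem"
  minikube -p functional-967202 ssh "sudo cat /etc/ssl/certs/51391683.0"     # hash-named link to the same certificate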

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-967202 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
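
The label check flattens the first node's label keys into a single line with a Go template. The query, as run above:

  kubectl --context functional-967202 get nodes --output=go-template \
    --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'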

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-967202 ssh "sudo systemctl is-active docker": exit status 1 (246.825425ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-967202 ssh "sudo systemctl is-active containerd": exit status 1 (236.153209ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
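
Because this cluster runs crio, the docker and containerd units are expected to be inactive; `systemctl is-active` prints "inactive" and exits non-zero for such units, which is what the stderr above reflects. A sketch of the check (the crio line is an extra illustration, not part of the test):

  minikube -p functional-967202 ssh "sudo systemctl is-active docker"        # inactive, non-zero exit
  minikube -p functional-967202 ssh "sudo systemctl is-active containerd"    # inactive, non-zero exit
  minikube -p functional-967202 ssh "sudo systemctl is-active crio"          # the configured runtime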

                                                
                                    
x
+
TestFunctional/parallel/License (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-967202 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| localhost/kicbase/echo-server           | functional-967202  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| docker.io/library/nginx                 | alpine             | 91ca84b4f5779 | 54MB   |
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-967202  | 61af457a0b211 | 3.33kB |
| localhost/my-image                      | functional-967202  | 886e2e2552b4d | 1.47MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-967202 image ls --format table --alsologtostderr:
I1209 22:48:49.532296   36575 out.go:345] Setting OutFile to fd 1 ...
I1209 22:48:49.532413   36575 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:48:49.532425   36575 out.go:358] Setting ErrFile to fd 2...
I1209 22:48:49.532432   36575 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:48:49.532625   36575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
I1209 22:48:49.533234   36575 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 22:48:49.533355   36575 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 22:48:49.533731   36575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 22:48:49.533787   36575 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 22:48:49.548304   36575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
I1209 22:48:49.548798   36575 main.go:141] libmachine: () Calling .GetVersion
I1209 22:48:49.549375   36575 main.go:141] libmachine: Using API Version  1
I1209 22:48:49.549398   36575 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 22:48:49.549802   36575 main.go:141] libmachine: () Calling .GetMachineName
I1209 22:48:49.549993   36575 main.go:141] libmachine: (functional-967202) Calling .GetState
I1209 22:48:49.552165   36575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 22:48:49.552232   36575 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 22:48:49.567645   36575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40427
I1209 22:48:49.568076   36575 main.go:141] libmachine: () Calling .GetVersion
I1209 22:48:49.568541   36575 main.go:141] libmachine: Using API Version  1
I1209 22:48:49.568566   36575 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 22:48:49.568913   36575 main.go:141] libmachine: () Calling .GetMachineName
I1209 22:48:49.569088   36575 main.go:141] libmachine: (functional-967202) Calling .DriverName
I1209 22:48:49.569288   36575 ssh_runner.go:195] Run: systemctl --version
I1209 22:48:49.569314   36575 main.go:141] libmachine: (functional-967202) Calling .GetSSHHostname
I1209 22:48:49.571989   36575 main.go:141] libmachine: (functional-967202) DBG | domain functional-967202 has defined MAC address 52:54:00:af:0a:5b in network mk-functional-967202
I1209 22:48:49.572391   36575 main.go:141] libmachine: (functional-967202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:0a:5b", ip: ""} in network mk-functional-967202: {Iface:virbr1 ExpiryTime:2024-12-09 23:45:32 +0000 UTC Type:0 Mac:52:54:00:af:0a:5b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:functional-967202 Clientid:01:52:54:00:af:0a:5b}
I1209 22:48:49.572428   36575 main.go:141] libmachine: (functional-967202) DBG | domain functional-967202 has defined IP address 192.168.50.72 and MAC address 52:54:00:af:0a:5b in network mk-functional-967202
I1209 22:48:49.572595   36575 main.go:141] libmachine: (functional-967202) Calling .GetSSHPort
I1209 22:48:49.572771   36575 main.go:141] libmachine: (functional-967202) Calling .GetSSHKeyPath
I1209 22:48:49.572908   36575 main.go:141] libmachine: (functional-967202) Calling .GetSSHUsername
I1209 22:48:49.573052   36575 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/functional-967202/id_rsa Username:docker}
I1209 22:48:49.691082   36575 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 22:48:49.817571   36575 main.go:141] libmachine: Making call to close driver server
I1209 22:48:49.817617   36575 main.go:141] libmachine: (functional-967202) Calling .Close
I1209 22:48:49.817879   36575 main.go:141] libmachine: Successfully made call to close driver server
I1209 22:48:49.817899   36575 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 22:48:49.817912   36575 main.go:141] libmachine: Making call to close driver server
I1209 22:48:49.817920   36575 main.go:141] libmachine: (functional-967202) Calling .Close
I1209 22:48:49.817918   36575 main.go:141] libmachine: (functional-967202) DBG | Closing plugin on server side
I1209 22:48:49.818107   36575 main.go:141] libmachine: Successfully made call to close driver server
I1209 22:48:49.818119   36575 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-967202 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"91ca84b4f57794f97f70443afccff26aed771e36bc4
8bad1e26c2ce66124ea66","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4","docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53958631"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage
-provisioner:v5"],"size":"31470524"},{"id":"61af457a0b2112c3897f19e0e32a45bbc30e587f4bc54888a906c9b60e231147","repoDigests":["localhost/minikube-local-cache-test@sha256:e7a40691a808afef5ceedad2a71b2f54f0ab672def91be16fc1440148cc7be56"],"repoTags":["localhost/minikube-local-cache-test:functional-967202"],"size":"3326"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"b32d4414db429007e0eb3df01a3179157e876f52d6b39a7f8e041096f33e3230","repoDigests":["docker.io/library/0637e7b95cf2b07a2d706f46c378cfde07086fa64c360081aa481776ee68cf5e-tmp@sha256:b7217fb06048602dd488d7ec882c87deb4f2c0c434b846eaf7c447ca6904858c"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDige
sts":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-967202"],"size":"4943877"},{"id":"886e2e2552b4da2b1c785d811ddbc754106a52bd16dbfdd08b8d7dff9ade98f0","repoDigests":["localhost/my-image@sha256:c95f6aa4a972b097e4b08d2513fac33781550fafc3b1670a3a6986413d46fd1e"],"repoTags":["localhost/my-image:functional-967202"],"size":"1468600"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause
:3.3"],"size":"686139"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128
d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"195919252"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac28746
3b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:
3.1"],"size":"746911"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-967202 image ls --format json --alsologtostderr:
I1209 22:48:49.227750   36552 out.go:345] Setting OutFile to fd 1 ...
I1209 22:48:49.227880   36552 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:48:49.227890   36552 out.go:358] Setting ErrFile to fd 2...
I1209 22:48:49.227896   36552 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:48:49.228186   36552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
I1209 22:48:49.228994   36552 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 22:48:49.229137   36552 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 22:48:49.229691   36552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 22:48:49.229746   36552 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 22:48:49.245430   36552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33943
I1209 22:48:49.245959   36552 main.go:141] libmachine: () Calling .GetVersion
I1209 22:48:49.246642   36552 main.go:141] libmachine: Using API Version  1
I1209 22:48:49.246670   36552 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 22:48:49.247020   36552 main.go:141] libmachine: () Calling .GetMachineName
I1209 22:48:49.247197   36552 main.go:141] libmachine: (functional-967202) Calling .GetState
I1209 22:48:49.249045   36552 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 22:48:49.249085   36552 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 22:48:49.263656   36552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
I1209 22:48:49.264121   36552 main.go:141] libmachine: () Calling .GetVersion
I1209 22:48:49.264617   36552 main.go:141] libmachine: Using API Version  1
I1209 22:48:49.264637   36552 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 22:48:49.264920   36552 main.go:141] libmachine: () Calling .GetMachineName
I1209 22:48:49.265089   36552 main.go:141] libmachine: (functional-967202) Calling .DriverName
I1209 22:48:49.265264   36552 ssh_runner.go:195] Run: systemctl --version
I1209 22:48:49.265288   36552 main.go:141] libmachine: (functional-967202) Calling .GetSSHHostname
I1209 22:48:49.267837   36552 main.go:141] libmachine: (functional-967202) DBG | domain functional-967202 has defined MAC address 52:54:00:af:0a:5b in network mk-functional-967202
I1209 22:48:49.268258   36552 main.go:141] libmachine: (functional-967202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:0a:5b", ip: ""} in network mk-functional-967202: {Iface:virbr1 ExpiryTime:2024-12-09 23:45:32 +0000 UTC Type:0 Mac:52:54:00:af:0a:5b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:functional-967202 Clientid:01:52:54:00:af:0a:5b}
I1209 22:48:49.268290   36552 main.go:141] libmachine: (functional-967202) DBG | domain functional-967202 has defined IP address 192.168.50.72 and MAC address 52:54:00:af:0a:5b in network mk-functional-967202
I1209 22:48:49.268410   36552 main.go:141] libmachine: (functional-967202) Calling .GetSSHPort
I1209 22:48:49.268569   36552 main.go:141] libmachine: (functional-967202) Calling .GetSSHKeyPath
I1209 22:48:49.268708   36552 main.go:141] libmachine: (functional-967202) Calling .GetSSHUsername
I1209 22:48:49.268859   36552 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/functional-967202/id_rsa Username:docker}
I1209 22:48:49.393783   36552 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 22:48:49.481466   36552 main.go:141] libmachine: Making call to close driver server
I1209 22:48:49.481483   36552 main.go:141] libmachine: (functional-967202) Calling .Close
I1209 22:48:49.481858   36552 main.go:141] libmachine: Successfully made call to close driver server
I1209 22:48:49.481880   36552 main.go:141] libmachine: (functional-967202) DBG | Closing plugin on server side
I1209 22:48:49.481897   36552 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 22:48:49.481916   36552 main.go:141] libmachine: Making call to close driver server
I1209 22:48:49.481929   36552 main.go:141] libmachine: (functional-967202) Calling .Close
I1209 22:48:49.482167   36552 main.go:141] libmachine: (functional-967202) DBG | Closing plugin on server side
I1209 22:48:49.482251   36552 main.go:141] libmachine: Successfully made call to close driver server
I1209 22:48:49.482285   36552 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (2.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image ls --format yaml --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-967202 image ls --format yaml --alsologtostderr: (2.218400117s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-967202 image ls --format yaml --alsologtostderr:
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
- docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371
repoTags:
- docker.io/library/nginx:alpine
size: "53958631"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 61af457a0b2112c3897f19e0e32a45bbc30e587f4bc54888a906c9b60e231147
repoDigests:
- localhost/minikube-local-cache-test@sha256:e7a40691a808afef5ceedad2a71b2f54f0ab672def91be16fc1440148cc7be56
repoTags:
- localhost/minikube-local-cache-test:functional-967202
size: "3326"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-967202
size: "4943877"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-967202 image ls --format yaml --alsologtostderr:
I1209 22:48:43.082578   36420 out.go:345] Setting OutFile to fd 1 ...
I1209 22:48:43.082714   36420 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:48:43.082725   36420 out.go:358] Setting ErrFile to fd 2...
I1209 22:48:43.082731   36420 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:48:43.082906   36420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
I1209 22:48:43.083513   36420 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 22:48:43.083648   36420 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 22:48:43.084025   36420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 22:48:43.084077   36420 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 22:48:43.099024   36420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
I1209 22:48:43.099541   36420 main.go:141] libmachine: () Calling .GetVersion
I1209 22:48:43.100098   36420 main.go:141] libmachine: Using API Version  1
I1209 22:48:43.100121   36420 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 22:48:43.100514   36420 main.go:141] libmachine: () Calling .GetMachineName
I1209 22:48:43.100698   36420 main.go:141] libmachine: (functional-967202) Calling .GetState
I1209 22:48:43.102739   36420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 22:48:43.102788   36420 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 22:48:43.117351   36420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33643
I1209 22:48:43.117869   36420 main.go:141] libmachine: () Calling .GetVersion
I1209 22:48:43.118366   36420 main.go:141] libmachine: Using API Version  1
I1209 22:48:43.118391   36420 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 22:48:43.118716   36420 main.go:141] libmachine: () Calling .GetMachineName
I1209 22:48:43.118938   36420 main.go:141] libmachine: (functional-967202) Calling .DriverName
I1209 22:48:43.119137   36420 ssh_runner.go:195] Run: systemctl --version
I1209 22:48:43.119161   36420 main.go:141] libmachine: (functional-967202) Calling .GetSSHHostname
I1209 22:48:43.122243   36420 main.go:141] libmachine: (functional-967202) DBG | domain functional-967202 has defined MAC address 52:54:00:af:0a:5b in network mk-functional-967202
I1209 22:48:43.122648   36420 main.go:141] libmachine: (functional-967202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:0a:5b", ip: ""} in network mk-functional-967202: {Iface:virbr1 ExpiryTime:2024-12-09 23:45:32 +0000 UTC Type:0 Mac:52:54:00:af:0a:5b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:functional-967202 Clientid:01:52:54:00:af:0a:5b}
I1209 22:48:43.122679   36420 main.go:141] libmachine: (functional-967202) DBG | domain functional-967202 has defined IP address 192.168.50.72 and MAC address 52:54:00:af:0a:5b in network mk-functional-967202
I1209 22:48:43.122773   36420 main.go:141] libmachine: (functional-967202) Calling .GetSSHPort
I1209 22:48:43.122918   36420 main.go:141] libmachine: (functional-967202) Calling .GetSSHKeyPath
I1209 22:48:43.123045   36420 main.go:141] libmachine: (functional-967202) Calling .GetSSHUsername
I1209 22:48:43.123173   36420 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/functional-967202/id_rsa Username:docker}
I1209 22:48:43.232832   36420 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 22:48:45.249015   36420 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.016141241s)
I1209 22:48:45.250010   36420 main.go:141] libmachine: Making call to close driver server
I1209 22:48:45.250030   36420 main.go:141] libmachine: (functional-967202) Calling .Close
I1209 22:48:45.250316   36420 main.go:141] libmachine: (functional-967202) DBG | Closing plugin on server side
I1209 22:48:45.250351   36420 main.go:141] libmachine: Successfully made call to close driver server
I1209 22:48:45.250360   36420 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 22:48:45.250371   36420 main.go:141] libmachine: Making call to close driver server
I1209 22:48:45.250383   36420 main.go:141] libmachine: (functional-967202) Calling .Close
I1209 22:48:45.250575   36420 main.go:141] libmachine: Successfully made call to close driver server
I1209 22:48:45.250593   36420 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 22:48:45.250594   36420 main.go:141] libmachine: (functional-967202) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (2.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-967202 ssh pgrep buildkitd: exit status 1 (232.793385ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image build -t localhost/my-image:functional-967202 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-967202 image build -t localhost/my-image:functional-967202 testdata/build --alsologtostderr: (3.382350938s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-967202 image build -t localhost/my-image:functional-967202 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b32d4414db4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-967202
--> 886e2e2552b
Successfully tagged localhost/my-image:functional-967202
886e2e2552b4da2b1c785d811ddbc754106a52bd16dbfdd08b8d7dff9ade98f0
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-967202 image build -t localhost/my-image:functional-967202 testdata/build --alsologtostderr:
I1209 22:48:45.536176   36489 out.go:345] Setting OutFile to fd 1 ...
I1209 22:48:45.536326   36489 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:48:45.536341   36489 out.go:358] Setting ErrFile to fd 2...
I1209 22:48:45.536347   36489 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 22:48:45.536576   36489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
I1209 22:48:45.537198   36489 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 22:48:45.537746   36489 config.go:182] Loaded profile config "functional-967202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 22:48:45.538120   36489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 22:48:45.538188   36489 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 22:48:45.553837   36489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36871
I1209 22:48:45.554342   36489 main.go:141] libmachine: () Calling .GetVersion
I1209 22:48:45.554936   36489 main.go:141] libmachine: Using API Version  1
I1209 22:48:45.554964   36489 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 22:48:45.555321   36489 main.go:141] libmachine: () Calling .GetMachineName
I1209 22:48:45.555514   36489 main.go:141] libmachine: (functional-967202) Calling .GetState
I1209 22:48:45.557341   36489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 22:48:45.557387   36489 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 22:48:45.572468   36489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
I1209 22:48:45.572986   36489 main.go:141] libmachine: () Calling .GetVersion
I1209 22:48:45.573498   36489 main.go:141] libmachine: Using API Version  1
I1209 22:48:45.573526   36489 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 22:48:45.573917   36489 main.go:141] libmachine: () Calling .GetMachineName
I1209 22:48:45.574108   36489 main.go:141] libmachine: (functional-967202) Calling .DriverName
I1209 22:48:45.574333   36489 ssh_runner.go:195] Run: systemctl --version
I1209 22:48:45.574370   36489 main.go:141] libmachine: (functional-967202) Calling .GetSSHHostname
I1209 22:48:45.577681   36489 main.go:141] libmachine: (functional-967202) DBG | domain functional-967202 has defined MAC address 52:54:00:af:0a:5b in network mk-functional-967202
I1209 22:48:45.578092   36489 main.go:141] libmachine: (functional-967202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:0a:5b", ip: ""} in network mk-functional-967202: {Iface:virbr1 ExpiryTime:2024-12-09 23:45:32 +0000 UTC Type:0 Mac:52:54:00:af:0a:5b Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:functional-967202 Clientid:01:52:54:00:af:0a:5b}
I1209 22:48:45.578132   36489 main.go:141] libmachine: (functional-967202) DBG | domain functional-967202 has defined IP address 192.168.50.72 and MAC address 52:54:00:af:0a:5b in network mk-functional-967202
I1209 22:48:45.578300   36489 main.go:141] libmachine: (functional-967202) Calling .GetSSHPort
I1209 22:48:45.578475   36489 main.go:141] libmachine: (functional-967202) Calling .GetSSHKeyPath
I1209 22:48:45.578645   36489 main.go:141] libmachine: (functional-967202) Calling .GetSSHUsername
I1209 22:48:45.578784   36489 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/functional-967202/id_rsa Username:docker}
I1209 22:48:45.661884   36489 build_images.go:161] Building image from path: /tmp/build.694257595.tar
I1209 22:48:45.661965   36489 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 22:48:45.672293   36489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.694257595.tar
I1209 22:48:45.676248   36489 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.694257595.tar: stat -c "%s %y" /var/lib/minikube/build/build.694257595.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.694257595.tar': No such file or directory
I1209 22:48:45.676278   36489 ssh_runner.go:362] scp /tmp/build.694257595.tar --> /var/lib/minikube/build/build.694257595.tar (3072 bytes)
I1209 22:48:45.701291   36489 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.694257595
I1209 22:48:45.712551   36489 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.694257595 -xf /var/lib/minikube/build/build.694257595.tar
I1209 22:48:45.722973   36489 crio.go:315] Building image: /var/lib/minikube/build/build.694257595
I1209 22:48:45.723038   36489 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-967202 /var/lib/minikube/build/build.694257595 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1209 22:48:48.795328   36489 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-967202 /var/lib/minikube/build/build.694257595 --cgroup-manager=cgroupfs: (3.072265112s)
I1209 22:48:48.795426   36489 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.694257595
I1209 22:48:48.826968   36489 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.694257595.tar
I1209 22:48:48.864750   36489 build_images.go:217] Built localhost/my-image:functional-967202 from /tmp/build.694257595.tar
I1209 22:48:48.864790   36489 build_images.go:133] succeeded building to: functional-967202
I1209 22:48:48.864795   36489 build_images.go:134] failed building to: 
I1209 22:48:48.864851   36489 main.go:141] libmachine: Making call to close driver server
I1209 22:48:48.864865   36489 main.go:141] libmachine: (functional-967202) Calling .Close
I1209 22:48:48.865166   36489 main.go:141] libmachine: Successfully made call to close driver server
I1209 22:48:48.865185   36489 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 22:48:48.865194   36489 main.go:141] libmachine: Making call to close driver server
I1209 22:48:48.865201   36489 main.go:141] libmachine: (functional-967202) Calling .Close
I1209 22:48:48.865204   36489 main.go:141] libmachine: (functional-967202) DBG | Closing plugin on server side
I1209 22:48:48.865445   36489 main.go:141] libmachine: Successfully made call to close driver server
I1209 22:48:48.865462   36489 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 22:48:48.865507   36489 main.go:141] libmachine: (functional-967202) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)

TestFunctional/parallel/ImageCommands/Setup (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.520053176s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-967202
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.54s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-967202 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-967202 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-sbr7f" [00d2998f-a894-4f2d-b3fb-25795ee39723] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-sbr7f" [00d2998f-a894-4f2d-b3fb-25795ee39723] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.005234748s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.17s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-967202 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-967202 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-967202 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 34381: os: process already finished
helpers_test.go:502: unable to terminate pid 34393: os: process already finished
helpers_test.go:502: unable to terminate pid 34432: os: process already finished
helpers_test.go:508: unable to kill pid 34358: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-967202 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image load --daemon kicbase/echo-server:functional-967202 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-967202 image load --daemon kicbase/echo-server:functional-967202 --alsologtostderr: (1.166263191s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-967202 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-967202 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [cfe9ed88-cc09-403d-a0dc-597ae44f1632] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [cfe9ed88-cc09-403d-a0dc-597ae44f1632] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.00646044s
I1209 22:48:28.192357   26253 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image load --daemon kicbase/echo-server:functional-967202 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-967202
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image load --daemon kicbase/echo-server:functional-967202 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-967202 image load --daemon kicbase/echo-server:functional-967202 --alsologtostderr: (1.68799523s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.64s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image save kicbase/echo-server:functional-967202 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image rm kicbase/echo-server:functional-967202 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.99s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-967202
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 image save --daemon kicbase/echo-server:functional-967202 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-967202 image save --daemon kicbase/echo-server:functional-967202 --alsologtostderr: (1.555225269s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-967202
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)

TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 service list -o json
functional_test.go:1494: Took "501.856582ms" to run "out/minikube-linux-amd64 -p functional-967202 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.50.72:30825
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

TestFunctional/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.50.72:30825
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-967202 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.52.189 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-967202 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "354.328632ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "51.066922ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/MountCmd/any-port (7.73s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-967202 /tmp/TestFunctionalparallelMountCmdany-port1276549715/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733784508872022721" to /tmp/TestFunctionalparallelMountCmdany-port1276549715/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733784508872022721" to /tmp/TestFunctionalparallelMountCmdany-port1276549715/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733784508872022721" to /tmp/TestFunctionalparallelMountCmdany-port1276549715/001/test-1733784508872022721
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-967202 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.650283ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 22:48:29.118001   26253 retry.go:31] will retry after 534.248334ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 22:48 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 22:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 22:48 test-1733784508872022721
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh cat /mount-9p/test-1733784508872022721
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-967202 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fc92dc21-f764-4dc2-9af7-d70918bddfe3] Pending
helpers_test.go:344: "busybox-mount" [fc92dc21-f764-4dc2-9af7-d70918bddfe3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fc92dc21-f764-4dc2-9af7-d70918bddfe3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fc92dc21-f764-4dc2-9af7-d70918bddfe3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004625242s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-967202 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-967202 /tmp/TestFunctionalparallelMountCmdany-port1276549715/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.73s)
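
The any-port flow above can be reproduced by hand. A minimal sketch, assuming the functional-967202 profile is still running and reusing the commands the test logs (the host directory name here is illustrative, not part of the test):

    # host side: share a temp dir into the guest over 9p; the mount command stays in the foreground, so background it
    mkdir -p /tmp/mount-demo && date > /tmp/mount-demo/created-by-hand
    out/minikube-linux-amd64 mount -p functional-967202 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &

    # guest side: confirm the 9p mount is visible and list what was shared
    out/minikube-linux-amd64 -p functional-967202 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-967202 ssh -- ls -la /mount-9p

    # tear down: force-unmount in the guest, then stop the backgrounded mount process
    out/minikube-linux-amd64 -p functional-967202 ssh "sudo umount -f /mount-9p"
    kill %1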

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "262.540057ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "45.998574ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)
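
The same machine-readable listing is useful outside the test. A sketch, assuming jq is available and that the output keeps minikube's usual valid/invalid grouping with a Name field per profile (neither assumption is verified by this test):

    # full listing vs. the cheaper --light variant
    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
    out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'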

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-967202 /tmp/TestFunctionalparallelMountCmdspecific-port2405245192/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-967202 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (257.519951ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 22:48:36.855442   26253 retry.go:31] will retry after 442.665017ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-967202 /tmp/TestFunctionalparallelMountCmdspecific-port2405245192/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-967202 ssh "sudo umount -f /mount-9p": exit status 1 (251.192858ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-967202 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-967202 /tmp/TestFunctionalparallelMountCmdspecific-port2405245192/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.83s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-967202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1080921672/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-967202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1080921672/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-967202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1080921672/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-967202 ssh "findmnt -T" /mount1: exit status 1 (265.662765ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 22:48:38.694000   26253 retry.go:31] will retry after 271.67858ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-967202 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-967202 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-967202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1080921672/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-967202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1080921672/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-967202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1080921672/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.40s)
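
The cleanup being verified is the --kill path, which removes every mount process for a profile at once. A minimal sketch (mount paths are illustrative):

    # start two mounts of the same host dir in the background
    out/minikube-linux-amd64 mount -p functional-967202 /tmp/mount-demo:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-967202 /tmp/mount-demo:/mount2 --alsologtostderr -v=1 &

    # kill all mount processes belonging to the profile in one call
    out/minikube-linux-amd64 mount -p functional-967202 --kill=true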

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-967202
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-967202
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-967202
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (194.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-920193 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1209 22:50:26.333134   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:50:54.037167   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-920193 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m14.15732673s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (194.81s)
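
Outside the harness, the same HA topology comes down to two commands. A sketch, using the kvm2 driver and crio runtime exercised throughout this run:

    # --ha provisions three control-plane nodes and waits for full readiness
    out/minikube-linux-amd64 start -p ha-920193 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio

    # per-node host/kubelet/apiserver state; each control plane should report Ready
    out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr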

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-920193 -- rollout status deployment/busybox: (3.787319703s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-4dbs2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-rkqdv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-zshqx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-4dbs2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-rkqdv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-zshqx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-4dbs2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-rkqdv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-zshqx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.87s)
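
The DNS checks above reduce to deploying the busybox manifest and resolving names from every replica. A condensed sketch using the same testdata (the loop is a convenience not used by the test; pod names differ per run):

    out/minikube-linux-amd64 kubectl -p ha-920193 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p ha-920193 -- rollout status deployment/busybox

    # resolve an external name and the in-cluster API name from each busybox pod
    for pod in $(out/minikube-linux-amd64 kubectl -p ha-920193 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      out/minikube-linux-amd64 kubectl -p ha-920193 -- exec "$pod" -- nslookup kubernetes.io
      out/minikube-linux-amd64 kubectl -p ha-920193 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done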

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-4dbs2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-4dbs2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-rkqdv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-rkqdv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-zshqx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-920193 -- exec busybox-7dff88458-zshqx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (57.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-920193 -v=7 --alsologtostderr
E1209 22:53:12.522105   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:53:12.528545   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:53:12.539887   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:53:12.561355   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:53:12.602795   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:53:12.684301   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:53:12.846572   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:53:13.168352   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:53:13.810348   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:53:15.092374   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 22:53:17.654770   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-920193 -v=7 --alsologtostderr: (57.161752272s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr
E1209 22:53:22.776545   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.99s)
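
Growing the cluster with a worker is a plain node add (no --control-plane flag). A sketch:

    out/minikube-linux-amd64 node add -p ha-920193 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr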

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-920193 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp testdata/cp-test.txt ha-920193:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3609340860/001/cp-test_ha-920193.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193:/home/docker/cp-test.txt ha-920193-m02:/home/docker/cp-test_ha-920193_ha-920193-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m02 "sudo cat /home/docker/cp-test_ha-920193_ha-920193-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193:/home/docker/cp-test.txt ha-920193-m03:/home/docker/cp-test_ha-920193_ha-920193-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m03 "sudo cat /home/docker/cp-test_ha-920193_ha-920193-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193:/home/docker/cp-test.txt ha-920193-m04:/home/docker/cp-test_ha-920193_ha-920193-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m04 "sudo cat /home/docker/cp-test_ha-920193_ha-920193-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp testdata/cp-test.txt ha-920193-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3609340860/001/cp-test_ha-920193-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193-m02:/home/docker/cp-test.txt ha-920193:/home/docker/cp-test_ha-920193-m02_ha-920193.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193 "sudo cat /home/docker/cp-test_ha-920193-m02_ha-920193.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193-m02:/home/docker/cp-test.txt ha-920193-m03:/home/docker/cp-test_ha-920193-m02_ha-920193-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m03 "sudo cat /home/docker/cp-test_ha-920193-m02_ha-920193-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193-m02:/home/docker/cp-test.txt ha-920193-m04:/home/docker/cp-test_ha-920193-m02_ha-920193-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m04 "sudo cat /home/docker/cp-test_ha-920193-m02_ha-920193-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp testdata/cp-test.txt ha-920193-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3609340860/001/cp-test_ha-920193-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt ha-920193:/home/docker/cp-test_ha-920193-m03_ha-920193.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193 "sudo cat /home/docker/cp-test_ha-920193-m03_ha-920193.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt ha-920193-m02:/home/docker/cp-test_ha-920193-m03_ha-920193-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m02 "sudo cat /home/docker/cp-test_ha-920193-m03_ha-920193-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193-m03:/home/docker/cp-test.txt ha-920193-m04:/home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt
E1209 22:53:33.018433   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m04 "sudo cat /home/docker/cp-test_ha-920193-m03_ha-920193-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp testdata/cp-test.txt ha-920193-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3609340860/001/cp-test_ha-920193-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt ha-920193:/home/docker/cp-test_ha-920193-m04_ha-920193.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193 "sudo cat /home/docker/cp-test_ha-920193-m04_ha-920193.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt ha-920193-m02:/home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m02 "sudo cat /home/docker/cp-test_ha-920193-m04_ha-920193-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 cp ha-920193-m04:/home/docker/cp-test.txt ha-920193-m03:/home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m03 "sudo cat /home/docker/cp-test_ha-920193-m04_ha-920193-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.84s)
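
The copy matrix above pairs minikube cp with an ssh cat on the destination. One leg of it, as a sketch:

    # host -> primary node, then verify the contents inside the guest
    out/minikube-linux-amd64 -p ha-920193 cp testdata/cp-test.txt ha-920193:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193 "sudo cat /home/docker/cp-test.txt"

    # node -> node copy, verified on the receiving node
    out/minikube-linux-amd64 -p ha-920193 cp ha-920193:/home/docker/cp-test.txt ha-920193-m02:/home/docker/cp-test_ha-920193_ha-920193-m02.txt
    out/minikube-linux-amd64 -p ha-920193 ssh -n ha-920193-m02 "sudo cat /home/docker/cp-test_ha-920193_ha-920193-m02.txt"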

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-920193 node delete m03 -v=7 --alsologtostderr: (15.80645217s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.50s)
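
Removing one control-plane member and confirming what remains, as a sketch:

    out/minikube-linux-amd64 -p ha-920193 node delete m03 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr
    kubectl get nodes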

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (362.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-920193 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1209 23:05:26.332810   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:08:12.523000   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:09:35.591058   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:10:26.333433   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-920193 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m1.28656195s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (362.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (77.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-920193 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-920193 --control-plane -v=7 --alsologtostderr: (1m17.074913897s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.91s)
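
Restoring the lost member uses the same node add as the worker case, but with --control-plane so the new node joins the control-plane set. A sketch:

    out/minikube-linux-amd64 node add -p ha-920193 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-920193 status -v=7 --alsologtostderr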

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                    
x
+
TestJSONOutput/start/Command (78.36s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-998694 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1209 23:13:12.521988   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-998694 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m18.364044421s)
--- PASS: TestJSONOutput/start/Command (78.36s)
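
Every JSONOutput case drives the normal CLI verbs with --output=json plus a --user tag (checked by the Audit subtests). The full start/pause/unpause/stop cycle from this group, as a sketch:

    out/minikube-linux-amd64 start -p json-output-998694 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 pause -p json-output-998694 --output=json --user=testUser
    out/minikube-linux-amd64 unpause -p json-output-998694 --output=json --user=testUser
    out/minikube-linux-amd64 stop -p json-output-998694 --output=json --user=testUser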

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-998694 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-998694 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-998694 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-998694 --output=json --user=testUser: (7.342742464s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-755586 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-755586 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.985862ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"81004c84-0530-4c04-b399-39f459169088","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-755586] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"31f0c7a6-7034-42e4-ba6c-53d65432b5a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19888"}}
	{"specversion":"1.0","id":"1fdc814e-f0d0-4989-9e1f-93c7c9b8dd18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8ce6d2d8-48aa-462c-9571-663222ed8663","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig"}}
	{"specversion":"1.0","id":"8288f5e1-f736-4414-99f6-13f044d07700","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube"}}
	{"specversion":"1.0","id":"4dc7da2d-1cc7-4740-a15e-533d133b6549","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"33b3afb5-c089-4b30-9f2c-a6d511d99c51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"35040313-1c68-4f06-8760-2941c8132b0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-755586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-755586
--- PASS: TestErrorJSONOutput (0.21s)
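
The error path above emits one CloudEvents-style record per line, with the failure carried in an io.k8s.sigs.minikube.error event. A sketch for pulling out just that event, assuming jq (not used by the test itself):

    out/minikube-linux-amd64 start -p json-output-error-755586 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit code \(.data.exitcode))"'
    out/minikube-linux-amd64 delete -p json-output-error-755586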

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (87.42s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-432904 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-432904 --driver=kvm2  --container-runtime=crio: (39.477471325s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-442660 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-442660 --driver=kvm2  --container-runtime=crio: (45.082962327s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-432904
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-442660
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-442660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-442660
helpers_test.go:175: Cleaning up "first-432904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-432904
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-432904: (1.011211477s)
--- PASS: TestMinikubeProfile (87.42s)
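
Switching between two profiles, as exercised here, in sketch form:

    out/minikube-linux-amd64 start -p first-432904 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p second-442660 --driver=kvm2 --container-runtime=crio

    # make one profile the active target, then confirm via the JSON listing
    out/minikube-linux-amd64 profile first-432904
    out/minikube-linux-amd64 profile list -ojson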

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (28.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-469825 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1209 23:15:26.333111   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-469825 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.874829514s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.88s)
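
Here the mount is baked into the profile at start time instead of running as a separate mount process; the shared directory shows up at /minikube-host inside the guest, as the next subtest checks. A sketch with the flags from this run:

    # --no-kubernetes keeps the VM minimal; the mount flags pin uid/gid, msize and the 9p port
    out/minikube-linux-amd64 start -p mount-start-1-469825 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio

    # verify from inside the guest
    out/minikube-linux-amd64 -p mount-start-1-469825 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-469825 ssh -- mount | grep 9p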

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-469825 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-469825 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (26.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-486382 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-486382 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.280227094s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.28s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-486382 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-486382 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-469825 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-486382 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-486382 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-486382
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-486382: (1.291318253s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.52s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-486382
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-486382: (22.519362633s)
--- PASS: TestMountStart/serial/RestartStopped (23.52s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-486382 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-486382 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (106.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-555395 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1209 23:18:12.522133   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-555395 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m46.365611063s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.77s)
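
The two-node variant (one control plane plus one worker) is the same start invocation with --nodes. A sketch:

    out/minikube-linux-amd64 start -p multinode-555395 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-555395 status --alsologtostderr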

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-555395 -- rollout status deployment/busybox: (3.469810966s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- exec busybox-7dff88458-6jcdr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- exec busybox-7dff88458-nthcc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- exec busybox-7dff88458-6jcdr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- exec busybox-7dff88458-nthcc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- exec busybox-7dff88458-6jcdr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- exec busybox-7dff88458-nthcc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.93s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- exec busybox-7dff88458-6jcdr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- exec busybox-7dff88458-6jcdr -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- exec busybox-7dff88458-nthcc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555395 -- exec busybox-7dff88458-nthcc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (52.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-555395 -v 3 --alsologtostderr
E1209 23:18:29.402585   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-555395 -v 3 --alsologtostderr: (51.461332029s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.01s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-555395 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 cp testdata/cp-test.txt multinode-555395:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 cp multinode-555395:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile813365822/001/cp-test_multinode-555395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 cp multinode-555395:/home/docker/cp-test.txt multinode-555395-m02:/home/docker/cp-test_multinode-555395_multinode-555395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395-m02 "sudo cat /home/docker/cp-test_multinode-555395_multinode-555395-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 cp multinode-555395:/home/docker/cp-test.txt multinode-555395-m03:/home/docker/cp-test_multinode-555395_multinode-555395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395-m03 "sudo cat /home/docker/cp-test_multinode-555395_multinode-555395-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 cp testdata/cp-test.txt multinode-555395-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 cp multinode-555395-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile813365822/001/cp-test_multinode-555395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 cp multinode-555395-m02:/home/docker/cp-test.txt multinode-555395:/home/docker/cp-test_multinode-555395-m02_multinode-555395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395 "sudo cat /home/docker/cp-test_multinode-555395-m02_multinode-555395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 cp multinode-555395-m02:/home/docker/cp-test.txt multinode-555395-m03:/home/docker/cp-test_multinode-555395-m02_multinode-555395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395-m03 "sudo cat /home/docker/cp-test_multinode-555395-m02_multinode-555395-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 cp testdata/cp-test.txt multinode-555395-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 cp multinode-555395-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile813365822/001/cp-test_multinode-555395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 cp multinode-555395-m03:/home/docker/cp-test.txt multinode-555395:/home/docker/cp-test_multinode-555395-m03_multinode-555395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395 "sudo cat /home/docker/cp-test_multinode-555395-m03_multinode-555395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 cp multinode-555395-m03:/home/docker/cp-test.txt multinode-555395-m02:/home/docker/cp-test_multinode-555395-m03_multinode-555395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395-m02 "sudo cat /home/docker/cp-test_multinode-555395-m03_multinode-555395-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.14s)
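
The sequence above is a round-trip check of minikube cp across nodes. A condensed manual equivalent, assuming the multinode-555395 profile from this run (commands taken verbatim from the log):

    # Host -> node: copy a local file onto the primary node.
    out/minikube-linux-amd64 -p multinode-555395 cp testdata/cp-test.txt multinode-555395:/home/docker/cp-test.txt

    # Node -> node: copy the file from the primary node to a worker.
    out/minikube-linux-amd64 -p multinode-555395 cp multinode-555395:/home/docker/cp-test.txt \
      multinode-555395-m02:/home/docker/cp-test_multinode-555395_multinode-555395-m02.txt

    # Verify the contents on the target node over SSH.
    out/minikube-linux-amd64 -p multinode-555395 ssh -n multinode-555395-m02 \
      "sudo cat /home/docker/cp-test_multinode-555395_multinode-555395-m02.txt"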

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-555395 node stop m03: (1.421392699s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-555395 status: exit status 7 (419.558163ms)

                                                
                                                
-- stdout --
	multinode-555395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-555395-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-555395-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-555395 status --alsologtostderr: exit status 7 (417.35961ms)

                                                
                                                
-- stdout --
	multinode-555395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-555395-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-555395-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:19:29.928395   53969 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:19:29.928492   53969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:19:29.928496   53969 out.go:358] Setting ErrFile to fd 2...
	I1209 23:19:29.928499   53969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:19:29.928671   53969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:19:29.928825   53969 out.go:352] Setting JSON to false
	I1209 23:19:29.928851   53969 mustload.go:65] Loading cluster: multinode-555395
	I1209 23:19:29.928974   53969 notify.go:220] Checking for updates...
	I1209 23:19:29.929268   53969 config.go:182] Loaded profile config "multinode-555395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:19:29.929288   53969 status.go:174] checking status of multinode-555395 ...
	I1209 23:19:29.929685   53969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:19:29.929722   53969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:19:29.946123   53969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
	I1209 23:19:29.946654   53969 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:19:29.947243   53969 main.go:141] libmachine: Using API Version  1
	I1209 23:19:29.947265   53969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:19:29.947599   53969 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:19:29.947785   53969 main.go:141] libmachine: (multinode-555395) Calling .GetState
	I1209 23:19:29.949399   53969 status.go:371] multinode-555395 host status = "Running" (err=<nil>)
	I1209 23:19:29.949419   53969 host.go:66] Checking if "multinode-555395" exists ...
	I1209 23:19:29.949827   53969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:19:29.949874   53969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:19:29.965993   53969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I1209 23:19:29.966437   53969 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:19:29.966935   53969 main.go:141] libmachine: Using API Version  1
	I1209 23:19:29.966958   53969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:19:29.967241   53969 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:19:29.967412   53969 main.go:141] libmachine: (multinode-555395) Calling .GetIP
	I1209 23:19:29.970059   53969 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:19:29.970458   53969 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:19:29.970510   53969 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:19:29.970554   53969 host.go:66] Checking if "multinode-555395" exists ...
	I1209 23:19:29.970973   53969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:19:29.971043   53969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:19:29.987522   53969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
	I1209 23:19:29.988014   53969 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:19:29.988563   53969 main.go:141] libmachine: Using API Version  1
	I1209 23:19:29.988594   53969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:19:29.988887   53969 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:19:29.989058   53969 main.go:141] libmachine: (multinode-555395) Calling .DriverName
	I1209 23:19:29.989381   53969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:19:29.989430   53969 main.go:141] libmachine: (multinode-555395) Calling .GetSSHHostname
	I1209 23:19:29.992407   53969 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:19:29.992795   53969 main.go:141] libmachine: (multinode-555395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:7a:66", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:16:50 +0000 UTC Type:0 Mac:52:54:00:f8:7a:66 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-555395 Clientid:01:52:54:00:f8:7a:66}
	I1209 23:19:29.992828   53969 main.go:141] libmachine: (multinode-555395) DBG | domain multinode-555395 has defined IP address 192.168.39.48 and MAC address 52:54:00:f8:7a:66 in network mk-multinode-555395
	I1209 23:19:29.992949   53969 main.go:141] libmachine: (multinode-555395) Calling .GetSSHPort
	I1209 23:19:29.993131   53969 main.go:141] libmachine: (multinode-555395) Calling .GetSSHKeyPath
	I1209 23:19:29.993261   53969 main.go:141] libmachine: (multinode-555395) Calling .GetSSHUsername
	I1209 23:19:29.993394   53969 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/multinode-555395/id_rsa Username:docker}
	I1209 23:19:30.074556   53969 ssh_runner.go:195] Run: systemctl --version
	I1209 23:19:30.080797   53969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:19:30.095917   53969 kubeconfig.go:125] found "multinode-555395" server: "https://192.168.39.48:8443"
	I1209 23:19:30.095959   53969 api_server.go:166] Checking apiserver status ...
	I1209 23:19:30.095996   53969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:19:30.108835   53969 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1043/cgroup
	W1209 23:19:30.117732   53969 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1043/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1209 23:19:30.117812   53969 ssh_runner.go:195] Run: ls
	I1209 23:19:30.122186   53969 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I1209 23:19:30.126359   53969 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I1209 23:19:30.126382   53969 status.go:463] multinode-555395 apiserver status = Running (err=<nil>)
	I1209 23:19:30.126391   53969 status.go:176] multinode-555395 status: &{Name:multinode-555395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:19:30.126407   53969 status.go:174] checking status of multinode-555395-m02 ...
	I1209 23:19:30.126683   53969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:19:30.126720   53969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:19:30.142530   53969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
	I1209 23:19:30.142908   53969 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:19:30.143419   53969 main.go:141] libmachine: Using API Version  1
	I1209 23:19:30.143443   53969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:19:30.143767   53969 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:19:30.143934   53969 main.go:141] libmachine: (multinode-555395-m02) Calling .GetState
	I1209 23:19:30.145446   53969 status.go:371] multinode-555395-m02 host status = "Running" (err=<nil>)
	I1209 23:19:30.145459   53969 host.go:66] Checking if "multinode-555395-m02" exists ...
	I1209 23:19:30.145731   53969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:19:30.145766   53969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:19:30.161473   53969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41295
	I1209 23:19:30.161857   53969 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:19:30.162431   53969 main.go:141] libmachine: Using API Version  1
	I1209 23:19:30.162458   53969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:19:30.162799   53969 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:19:30.162964   53969 main.go:141] libmachine: (multinode-555395-m02) Calling .GetIP
	I1209 23:19:30.165393   53969 main.go:141] libmachine: (multinode-555395-m02) DBG | domain multinode-555395-m02 has defined MAC address 52:54:00:e5:cd:ce in network mk-multinode-555395
	I1209 23:19:30.165761   53969 main.go:141] libmachine: (multinode-555395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:cd:ce", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:17:50 +0000 UTC Type:0 Mac:52:54:00:e5:cd:ce Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-555395-m02 Clientid:01:52:54:00:e5:cd:ce}
	I1209 23:19:30.165795   53969 main.go:141] libmachine: (multinode-555395-m02) DBG | domain multinode-555395-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:e5:cd:ce in network mk-multinode-555395
	I1209 23:19:30.165871   53969 host.go:66] Checking if "multinode-555395-m02" exists ...
	I1209 23:19:30.166208   53969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:19:30.166248   53969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:19:30.181537   53969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35295
	I1209 23:19:30.181931   53969 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:19:30.182475   53969 main.go:141] libmachine: Using API Version  1
	I1209 23:19:30.182496   53969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:19:30.182809   53969 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:19:30.182977   53969 main.go:141] libmachine: (multinode-555395-m02) Calling .DriverName
	I1209 23:19:30.183148   53969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:19:30.183166   53969 main.go:141] libmachine: (multinode-555395-m02) Calling .GetSSHHostname
	I1209 23:19:30.185931   53969 main.go:141] libmachine: (multinode-555395-m02) DBG | domain multinode-555395-m02 has defined MAC address 52:54:00:e5:cd:ce in network mk-multinode-555395
	I1209 23:19:30.186281   53969 main.go:141] libmachine: (multinode-555395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:cd:ce", ip: ""} in network mk-multinode-555395: {Iface:virbr1 ExpiryTime:2024-12-10 00:17:50 +0000 UTC Type:0 Mac:52:54:00:e5:cd:ce Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-555395-m02 Clientid:01:52:54:00:e5:cd:ce}
	I1209 23:19:30.186319   53969 main.go:141] libmachine: (multinode-555395-m02) DBG | domain multinode-555395-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:e5:cd:ce in network mk-multinode-555395
	I1209 23:19:30.186478   53969 main.go:141] libmachine: (multinode-555395-m02) Calling .GetSSHPort
	I1209 23:19:30.186689   53969 main.go:141] libmachine: (multinode-555395-m02) Calling .GetSSHKeyPath
	I1209 23:19:30.186801   53969 main.go:141] libmachine: (multinode-555395-m02) Calling .GetSSHUsername
	I1209 23:19:30.186944   53969 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19888-18950/.minikube/machines/multinode-555395-m02/id_rsa Username:docker}
	I1209 23:19:30.266588   53969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:19:30.279277   53969 status.go:176] multinode-555395-m02 status: &{Name:multinode-555395-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1209 23:19:30.279314   53969 status.go:174] checking status of multinode-555395-m03 ...
	I1209 23:19:30.279664   53969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:19:30.279701   53969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:19:30.295757   53969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34191
	I1209 23:19:30.296230   53969 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:19:30.296762   53969 main.go:141] libmachine: Using API Version  1
	I1209 23:19:30.296782   53969 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:19:30.297079   53969 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:19:30.297246   53969 main.go:141] libmachine: (multinode-555395-m03) Calling .GetState
	I1209 23:19:30.298829   53969 status.go:371] multinode-555395-m03 host status = "Stopped" (err=<nil>)
	I1209 23:19:30.298841   53969 status.go:384] host is not running, skipping remaining checks
	I1209 23:19:30.298846   53969 status.go:176] multinode-555395-m03 status: &{Name:multinode-555395-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
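
As the output above shows, minikube status prints a per-node summary but exits with status 7 while any node's host is stopped. A minimal sketch of the same check, assuming the same profile:

    out/minikube-linux-amd64 -p multinode-555395 node stop m03
    out/minikube-linux-amd64 -p multinode-555395 status
    echo $?   # expected: 7 while multinode-555395-m03 is stopped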

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-555395 node start m03 -v=7 --alsologtostderr: (37.86243891s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.48s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-555395 node delete m03: (1.424340558s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.94s)
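
A hand-run equivalent of the delete-and-verify step, assuming the same profile and kubeconfig context:

    out/minikube-linux-amd64 -p multinode-555395 node delete m03
    out/minikube-linux-amd64 -p multinode-555395 status --alsologtostderr
    kubectl --context multinode-555395 get nodes   # m03 should no longer be listed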

                                                
                                    
TestMultiNode/serial/RestartMultiNode (206.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-555395 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1209 23:28:12.527197   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:30:26.332809   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-555395 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m25.817832453s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555395 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (206.37s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-555395
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-555395-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-555395-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (69.275651ms)

                                                
                                                
-- stdout --
	* [multinode-555395-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-555395-m02' is duplicated with machine name 'multinode-555395-m02' in profile 'multinode-555395'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-555395-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-555395-m03 --driver=kvm2  --container-runtime=crio: (43.990963211s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-555395
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-555395: exit status 80 (221.879021ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-555395 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-555395-m03 already exists in multinode-555395-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-555395-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.11s)
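
The two failure modes exercised here map to distinct exit codes: reusing an existing machine name as a profile name is rejected with exit status 14 (MK_USAGE), and adding a node whose generated name collides with an existing profile fails with exit status 80 (GUEST_NODE_ADD). A minimal reproduction, assuming the profiles from this run:

    # Profile name clashes with the m02 machine of profile multinode-555395 -> exit status 14
    out/minikube-linux-amd64 start -p multinode-555395-m02 --driver=kvm2 --container-runtime=crio

    # With a standalone multinode-555395-m03 profile present, node add refuses the name -> exit status 80
    out/minikube-linux-amd64 node add -p multinode-555395

    # Cleanup of the conflicting profile.
    out/minikube-linux-amd64 delete -p multinode-555395-m03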

                                                
                                    
TestScheduledStopUnix (114.23s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-618374 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-618374 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.633276741s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-618374 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-618374 -n scheduled-stop-618374
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-618374 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1209 23:37:21.071826   26253 retry.go:31] will retry after 53.888µs: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.072968   26253 retry.go:31] will retry after 130.145µs: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.074100   26253 retry.go:31] will retry after 207.704µs: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.075252   26253 retry.go:31] will retry after 182.201µs: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.076384   26253 retry.go:31] will retry after 752.623µs: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.077494   26253 retry.go:31] will retry after 588.708µs: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.078615   26253 retry.go:31] will retry after 1.596592ms: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.080826   26253 retry.go:31] will retry after 2.096063ms: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.084023   26253 retry.go:31] will retry after 1.606461ms: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.086243   26253 retry.go:31] will retry after 3.010865ms: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.089439   26253 retry.go:31] will retry after 3.203106ms: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.093671   26253 retry.go:31] will retry after 4.386942ms: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.098874   26253 retry.go:31] will retry after 12.182297ms: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.112111   26253 retry.go:31] will retry after 16.848043ms: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
I1209 23:37:21.129393   26253 retry.go:31] will retry after 30.992825ms: open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/scheduled-stop-618374/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-618374 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-618374 -n scheduled-stop-618374
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-618374
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-618374 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1209 23:38:12.527857   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-618374
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-618374: exit status 7 (70.29548ms)

                                                
                                                
-- stdout --
	scheduled-stop-618374
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-618374 -n scheduled-stop-618374
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-618374 -n scheduled-stop-618374: exit status 7 (63.99521ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-618374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-618374
--- PASS: TestScheduledStopUnix (114.23s)
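
The scheduled-stop flow above can be driven manually with the same flags. A sketch, assuming a running scheduled-stop-618374 profile:

    # Schedule a stop 5 minutes out, then replace it with a 15-second schedule.
    out/minikube-linux-amd64 stop -p scheduled-stop-618374 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-618374 --schedule 15s

    # Cancel a pending scheduled stop.
    out/minikube-linux-amd64 stop -p scheduled-stop-618374 --cancel-scheduled

    # After a schedule fires, status reports Stopped and exits with status 7.
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-618374 -n scheduled-stop-618374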

                                                
                                    
TestRunningBinaryUpgrade (199.39s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1525172127 start -p running-upgrade-792835 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1525172127 start -p running-upgrade-792835 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m49.988418741s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-792835 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-792835 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m27.634451823s)
helpers_test.go:175: Cleaning up "running-upgrade-792835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-792835
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-792835: (1.221599631s)
--- PASS: TestRunningBinaryUpgrade (199.39s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.74s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (167.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2903247470 start -p stopped-upgrade-992578 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2903247470 start -p stopped-upgrade-992578 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m37.792481309s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2903247470 -p stopped-upgrade-992578 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2903247470 -p stopped-upgrade-992578 stop: (1.534014118s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-992578 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1209 23:40:26.332831   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-992578 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.217953051s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (167.54s)
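
The upgrade path being validated is: provision with an older release, stop the cluster with that same old binary, then start it again with the binary under test. A condensed sketch of those steps; the /tmp path is the temporary copy of the v1.26.0 release the test downloads, so the exact filename varies per run:

    # 1. Create the cluster with the old (v1.26.0) binary.
    /tmp/minikube-v1.26.0.2903247470 start -p stopped-upgrade-992578 --memory=2200 --vm-driver=kvm2 --container-runtime=crio

    # 2. Stop it with the same old binary.
    /tmp/minikube-v1.26.0.2903247470 -p stopped-upgrade-992578 stop

    # 3. Start it again with the binary under test; a clean start here is what the test asserts.
    out/minikube-linux-amd64 start -p stopped-upgrade-992578 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio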

                                                
                                    
TestNetworkPlugins/group/false (3.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-030585 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-030585 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (115.010563ms)

                                                
                                                
-- stdout --
	* [false-030585] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 23:38:35.449302   61706 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:38:35.449938   61706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:38:35.449954   61706 out.go:358] Setting ErrFile to fd 2...
	I1209 23:38:35.449961   61706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:38:35.450436   61706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-18950/.minikube/bin
	I1209 23:38:35.451437   61706 out.go:352] Setting JSON to false
	I1209 23:38:35.452455   61706 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8466,"bootTime":1733779049,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:38:35.452551   61706 start.go:139] virtualization: kvm guest
	I1209 23:38:35.454677   61706 out.go:177] * [false-030585] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:38:35.456155   61706 notify.go:220] Checking for updates...
	I1209 23:38:35.456223   61706 out.go:177]   - MINIKUBE_LOCATION=19888
	I1209 23:38:35.457740   61706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:38:35.459140   61706 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	I1209 23:38:35.460565   61706 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	I1209 23:38:35.462002   61706 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:38:35.463508   61706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:38:35.465183   61706 config.go:182] Loaded profile config "kubernetes-upgrade-996806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 23:38:35.465285   61706 config.go:182] Loaded profile config "offline-crio-988358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:38:35.465402   61706 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:38:35.503030   61706 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 23:38:35.504336   61706 start.go:297] selected driver: kvm2
	I1209 23:38:35.504367   61706 start.go:901] validating driver "kvm2" against <nil>
	I1209 23:38:35.504386   61706 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:38:35.506458   61706 out.go:201] 
	W1209 23:38:35.507665   61706 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1209 23:38:35.509121   61706 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-030585 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-030585

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-030585

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-030585

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-030585

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-030585

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-030585

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-030585

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-030585

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-030585

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-030585

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-030585

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-030585" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-030585" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-030585

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-030585"

                                                
                                                
----------------------- debugLogs end: false-030585 [took: 3.049128355s] --------------------------------
helpers_test.go:175: Cleaning up "false-030585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-030585
--- PASS: TestNetworkPlugins/group/false (3.30s)

                                                
                                    
x
+
TestPause/serial/Start (96.82s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-424513 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-424513 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m36.823583341s)
--- PASS: TestPause/serial/Start (96.82s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (41.17s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-424513 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-424513 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.145455753s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.17s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-992578
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-034762 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-034762 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (80.326083ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-034762] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19888
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19888-18950/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-18950/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
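Note on the check above: the CLI rejects combining --no-kubernetes with --kubernetes-version and exits with status 14 (MK_USAGE), suggesting `minikube config unset kubernetes-version` as the fix. Below is a minimal, hypothetical Go sketch (not the minikube integration-test harness) that re-runs the same invocation from the log with os/exec and reports the non-zero exit; the binary path and flags are copied from the output above.

// Hypothetical sketch: reproduce the flag-conflict check shown in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "NoKubernetes-034762",
		"--no-kubernetes", "--kubernetes-version=1.20",
		"--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// The report above shows exit status 14 for this flag combination.
		fmt.Printf("rejected as expected: exit code %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Println("unexpected: the conflicting flags were accepted")
}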

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (42.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-034762 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-034762 --driver=kvm2  --container-runtime=crio: (42.222636093s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-034762 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.54s)

                                                
                                    
x
+
TestPause/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-424513 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-424513 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-424513 --output=json --layout=cluster: exit status 2 (248.358566ms)

                                                
                                                
-- stdout --
	{"Name":"pause-424513","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-424513","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
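Note on the status output above: `minikube status --output=json --layout=cluster` returns a nested cluster/node/component structure (418 "Paused", 405 "Stopped", 200 "OK"). The sketch below decodes that shape with encoding/json; the struct fields are inferred from this one sample in the log, not from minikube's own API types, so treat them as illustrative only.

// Hypothetical sketch: decode the --layout=cluster status JSON shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]component `json:"Components"`
	Nodes         []struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed sample in the same shape as the report's output.
	raw := []byte(`{"Name":"pause-424513","StatusCode":418,"StatusName":"Paused","Nodes":[]}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Println(st.Name, st.StatusName) // pause-424513 Paused
}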

                                                
                                    
x
+
TestPause/serial/Unpause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-424513 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-424513 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-424513 --alsologtostderr -v=5: (1.002419744s)
--- PASS: TestPause/serial/PauseAgain (1.00s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-424513 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-424513 --alsologtostderr -v=5: (1.004533645s)
--- PASS: TestPause/serial/DeletePaused (1.00s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (11.64s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (11.643848462s)
--- PASS: TestPause/serial/VerifyDeletedResources (11.64s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (60.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-034762 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-034762 --no-kubernetes --driver=kvm2  --container-runtime=crio: (59.381985044s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-034762 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-034762 status -o json: exit status 2 (258.873788ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-034762","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-034762
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-034762: (1.048210647s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (60.69s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (47.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-034762 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-034762 --no-kubernetes --driver=kvm2  --container-runtime=crio: (47.638894523s)
--- PASS: TestNoKubernetes/serial/Start (47.64s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-034762 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-034762 "sudo systemctl is-active --quiet service kubelet": exit status 1 (200.70258ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
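Note on the check above: the test treats a non-zero exit from `systemctl is-active --quiet ... kubelet` (status 3 here) as "kubelet is not running", since systemctl exits 0 only when the unit is active. A minimal, hypothetical sketch of the same interpretation is below; it runs systemctl locally for illustration rather than over `minikube ssh`.

// Hypothetical sketch: interpret `systemctl is-active` the way the test does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err == nil {
		fmt.Println("kubelet is active")
		return
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		// exit code 3 is what the report shows for a stopped/absent unit
		fmt.Printf("kubelet is not active (exit code %d)\n", exitErr.ExitCode())
		return
	}
	fmt.Println("could not run systemctl:", err)
}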

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.02s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-034762
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-034762: (1.28576524s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (59.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-034762 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-034762 --driver=kvm2  --container-runtime=crio: (59.741512693s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (59.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (74.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m14.252601654s)
--- PASS: TestNetworkPlugins/group/auto/Start (74.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-034762 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-034762 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.346578ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (96.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1209 23:45:26.333579   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m36.510607886s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (96.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-030585 "pgrep -a kubelet"
I1209 23:45:48.563950   26253 config.go:182] Loaded profile config "auto-030585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-030585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9ktvc" [29c372b6-1fec-4de1-b764-96aa61b97503] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9ktvc" [29c372b6-1fec-4de1-b764-96aa61b97503] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004007282s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (26.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-030585 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-030585 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14382284s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 23:46:13.919215   26253 retry.go:31] will retry after 1.164104434s: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context auto-030585 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context auto-030585 exec deployment/netcat -- nslookup kubernetes.default: (10.151337178s)
--- PASS: TestNetworkPlugins/group/auto/DNS (26.46s)
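Note on the DNS check above: the first nslookup times out, the harness logs "will retry after 1.164104434s", and the second attempt succeeds. The sketch below shows the same retry-until-success pattern around the kubectl command from the log; the fixed attempt count and delay are assumptions for illustration and do not reproduce minikube's retry helper (retry.go).

// Hypothetical sketch of the retry pattern visible above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const attempts = 3
	for i := 1; i <= attempts; i++ {
		out, err := exec.Command("kubectl", "--context", "auto-030585",
			"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default").CombinedOutput()
		if err == nil {
			fmt.Printf("lookup succeeded on attempt %d:\n%s", i, out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n", i, err)
		time.Sleep(1200 * time.Millisecond) // delay assumed; the harness computed ~1.16s
	}
	fmt.Println("DNS lookup did not succeed after", attempts, "attempts")
}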

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-w49hr" [6c2027b7-cdd7-4ff6-ba28-1c5046c10023] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005914474s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-030585 "pgrep -a kubelet"
I1209 23:46:39.338127   26253 config.go:182] Loaded profile config "kindnet-030585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-030585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dmt7b" [f2993cd8-bc7b-4876-b5c7-945fec0f2f50] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dmt7b" [f2993cd8-bc7b-4876-b5c7-945fec0f2f50] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004101694s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (80.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m20.479049745s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-030585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (81.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.606348791s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (79.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m19.882885484s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (101.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m41.548305473s)
--- PASS: TestNetworkPlugins/group/flannel/Start (101.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vnzbb" [f056e17d-512c-453a-9993-6e0b773dc835] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005228179s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-030585 "pgrep -a kubelet"
I1209 23:48:06.896083   26253 config.go:182] Loaded profile config "calico-030585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-030585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fkl5l" [3f499175-954d-4e7e-8f6b-8cef2435e0ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1209 23:48:12.522073   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/functional-967202/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-fkl5l" [3f499175-954d-4e7e-8f6b-8cef2435e0ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004484074s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-030585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-030585 "pgrep -a kubelet"
I1209 23:48:27.928791   26253 config.go:182] Loaded profile config "custom-flannel-030585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-030585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-khqz8" [216ce04d-c820-4963-a6e1-d2f0981ff6bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-khqz8" [216ce04d-c820-4963-a6e1-d2f0981ff6bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004573809s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-030585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m33.004299097s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-030585 "pgrep -a kubelet"
I1209 23:48:38.116376   26253 config.go:182] Loaded profile config "enable-default-cni-030585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-030585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qf6hc" [f83b58b9-0e3c-467e-b27a-491695c44ed0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qf6hc" [f83b58b9-0e3c-467e-b27a-491695c44ed0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004973999s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-030585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-030585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (128.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-048296 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-048296 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (2m8.862877002s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (128.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ffcj4" [dec1edd1-d619-4a33-a850-db238fb93927] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004066471s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-030585 "pgrep -a kubelet"
I1209 23:49:26.765897   26253 config.go:182] Loaded profile config "flannel-030585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-030585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z4szv" [d84acf22-fde9-4676-a3e4-a129ef40ecdf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z4szv" [d84acf22-fde9-4676-a3e4-a129ef40ecdf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004828668s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-030585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (85.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-825613 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-825613 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m25.299651705s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-030585 "pgrep -a kubelet"
I1209 23:50:10.967891   26253 config.go:182] Loaded profile config "bridge-030585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-030585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jdsnb" [26b76697-7e67-44d5-a5e5-f6194e5e84e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jdsnb" [26b76697-7e67-44d5-a5e5-f6194e5e84e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004275727s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-030585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-030585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E1210 00:19:20.544031   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-871210 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1209 23:50:48.762442   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:50:48.768870   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:50:48.780268   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:50:48.801651   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:50:48.843033   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:50:48.924520   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:50:49.086061   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:50:49.408322   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:50:50.050329   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:50:51.331634   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:50:53.893908   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:50:59.015918   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:51:09.258139   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/auto-030585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-871210 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m32.175901857s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-048296 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [623ddd15-80ab-402d-88c6-e4e03bba3e9e] Pending
helpers_test.go:344: "busybox" [623ddd15-80ab-402d-88c6-e4e03bba3e9e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [623ddd15-80ab-402d-88c6-e4e03bba3e9e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003847997s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-048296 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-825613 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [26d6fcf4-98a5-4f18-a823-b7b7ec824711] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [26d6fcf4-98a5-4f18-a823-b7b7ec824711] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005733904s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-825613 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-048296 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-048296 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-825613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-825613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.013801576s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-825613 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)
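Beyond the describe call above, the effect of the --images/--registries overrides can be spot-checked by reading the deployment's container image directly. The jsonpath query below is an illustrative addition, not something the test itself runs; the image it prints should combine the fake.domain registry with the echoserver:1.4 tag configured above.

    kubectl --context embed-certs-825613 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'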

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-871210 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ee8b7476-5ac5-4716-b3cb-4db1509c7925] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1209 23:52:14.081133   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [ee8b7476-5ac5-4716-b3cb-4db1509c7925] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005042925s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-871210 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)
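The DeployApp flow above (create the busybox pod from testdata, wait for it to become Ready, then check ulimit) can be approximated outside the test harness with plain kubectl against the same context. This is a sketch only: the explicit kubectl wait call and its timeout are illustrative assumptions, since the harness does its own polling via helpers_test.go.

    kubectl --context default-k8s-diff-port-871210 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-871210 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context default-k8s-diff-port-871210 exec busybox -- /bin/sh -c "ulimit -n"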

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-871210 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-871210 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (682.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-048296 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1209 23:53:58.826541   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-048296 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (11m21.94963197s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048296 -n no-preload-048296
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (682.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (576.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-825613 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1209 23:54:09.163316   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/custom-flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:16.965100   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/kindnet-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:19.308322   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:20.544869   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:20.551232   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:20.562626   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:20.584015   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:20.625383   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:20.707725   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:20.869303   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:54:21.191150   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-825613 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m36.47430326s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-825613 -n embed-certs-825613
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (576.74s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (583.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-871210 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1209 23:55:00.271020   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/enable-default-cni-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:01.521122   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/flannel-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:11.199060   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:11.205491   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:11.216917   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:11.238329   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:11.279804   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:11.361252   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:11.522764   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:11.844458   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:12.486509   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:13.768443   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:16.330290   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-871210 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m43.062000474s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871210 -n default-k8s-diff-port-871210
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (583.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (5.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-720064 --alsologtostderr -v=3
E1209 23:55:21.451969   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-720064 --alsologtostderr -v=3: (5.482070857s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064: exit status 7 (63.252939ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-720064 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
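The "exit status 7 (may be ok)" lines reflect how minikube status signals a stopped host through its exit code. A minimal shell sketch of the same check, built only from commands that appear in this log (the exit-code branching itself is illustrative):

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-720064 -n old-k8s-version-720064
    if [ $? -eq 7 ]; then
      # host reports Stopped; the dashboard addon can still be enabled against the stored config
      out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-720064 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    fi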

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-677937 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-677937 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (48.071101425s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.07s)
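One way to confirm that the --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 flag took effect is to read the cluster configuration back out of the running cluster. This assumes the usual kubeadm-config ConfigMap in kube-system, which the test itself does not query; expect a podSubnet entry matching the CIDR above.

    kubectl --context newest-cni-677937 -n kube-system get configmap kubeadm-config -o yaml | grep podSubnet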

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-677937 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-677937 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.05852952s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-677937 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-677937 --alsologtostderr -v=3: (11.34133371s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-677937 -n newest-cni-677937
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-677937 -n newest-cni-677937: exit status 7 (66.416852ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-677937 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (35.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-677937 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1210 00:20:11.198577   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/bridge-030585/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:20:26.333420   26253 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-18950/.minikube/profiles/addons-495659/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-677937 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (35.624416975s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-677937 -n newest-cni-677937
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.90s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-677937 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)
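The image audit above can be repeated by hand with the same image list command from the log; the extra grep that filters out the expected registries to surface images such as kindest/kindnetd is an illustrative addition.

    out/minikube-linux-amd64 -p newest-cni-677937 image list --format=json
    out/minikube-linux-amd64 -p newest-cni-677937 image list | grep -v -e '^registry.k8s.io/' -e '^gcr.io/'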

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-677937 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-677937 -n newest-cni-677937
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-677937 -n newest-cni-677937: exit status 2 (258.550985ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-677937 -n newest-cni-677937
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-677937 -n newest-cni-677937: exit status 2 (265.831637ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-677937 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-677937 -n newest-cni-677937
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-677937 -n newest-cni-677937
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)
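The pause/unpause sequence relies on the same status commands reporting per-component state: {{.APIServer}} prints Paused and {{.Kubelet}} prints Stopped, each with exit status 2, while the cluster is paused. A condensed sketch using only commands from this log; the || true guards are illustrative, added because both status calls exit non-zero while the cluster is paused.

    out/minikube-linux-amd64 pause -p newest-cni-677937 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-677937 -n newest-cni-677937 || true
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-677937 -n newest-cni-677937 || true
    out/minikube-linux-amd64 unpause -p newest-cni-677937 --alsologtostderr -v=1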

                                                
                                    

Test skip (34/321)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-495659 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-030585 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-030585

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-030585

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-030585

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-030585

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-030585

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-030585

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-030585

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-030585

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-030585

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-030585

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-030585

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-030585" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-030585" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-030585

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-030585"

                                                
                                                
----------------------- debugLogs end: kubenet-030585 [took: 2.914687751s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-030585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-030585
--- SKIP: TestNetworkPlugins/group/kubenet (3.08s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-030585 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-030585" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-030585

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-030585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030585"

                                                
                                                
----------------------- debugLogs end: cilium-030585 [took: 3.397905643s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-030585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-030585
--- SKIP: TestNetworkPlugins/group/cilium (3.55s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-866797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-866797
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    